Abstract

There is an increasing focus on the role of complexity in the public health and public policy fields, which has brought about a methodological shift towards computational approaches. This includes agent-based modelling (ABM), a method used to simulate individuals, their behaviour and interactions with each other, and their social and physical environment. This paper aims to systematically review the use of ABM to simulate the generation or persistence of health inequalities. PubMed, Scopus, and Web of Science (1 January 2013-15 November 2022) were searched, supplemented with manual reference list searching. Twenty studies were included; fourteen of them described models of health behaviours, most commonly relating to diet (n = 7). Six models explored health outcomes, e.g., morbidity, mortality, and depression. All of the included models involved heterogeneous agents and were dynamic, with agents making decisions, growing older, and/or becoming exposed to different health risks. Eighteen models represented physical space, and in eleven models, agents interacted with other agents through social networks. ABM is increasingly contributing to our understanding of the socioeconomic inequalities in health. However, to date, the majority of these models focus on differences in health behaviours. Future research should attempt to investigate the social and economic drivers of health inequalities using ABM.

---

Introduction
Systematic socioeconomic inequalities in health persist and continue to widen within many economically prosperous countries across the globe [1,2]. The socioeconomic gradient in health remains one of the main challenges for public health as socioeconomically disadvantaged individuals have a lower life expectancy and a higher risk of developing life-limiting illnesses, such as diabetes and cardiovascular disease, compared to their advantaged counterparts [3,4].
The theories and frameworks developed to understand the causes of and solutions to the socioeconomic gradient in health are undoubtedly complex. For example, the World Health Organization's (WHO) Commission on the Social Determinants of Health (CSDH) developed a conceptual framework to illustrate the relationship between the social determinants of health and equity in health and wellbeing, which was multi-level and contained feedback loops [5]. The CSDH framework highlights the multi-faceted nature of inequality from the impact of the socioeconomic and political context to psychosocial factors and biology. Thus, there is an increasing recognition that health inequality is a complex or 'wicked' problem and systems simulation models are a useful tool to understand the underlying causes and mechanisms [6].
Complex systems are systems that consist of interacting parts or subsystems. Key characteristics of complex systems include dynamics that produce adaptation to change, non-linear relationships, feedback loops, tipping points, and the emergence of macro-level phenomena from interactions at the micro level (see, e.g., CECAN 2018) [7]. It is difficult to capture these relationships using a traditional epidemiological "risk factor" approach, which uses linear reductionist models to test the relationships between decontextualised dependent and independent variables [8]. Agent-based modelling (ABM), a well-established methodological approach used widely in the field of social science, has been highlighted as a method that can address this problem [6]. ABM involves simulating the actions and interactions of individual agents with other agents and their environment based on a set of specified rules and observing emergent phenomena [9]. Agents may adapt their own behaviour in response to previous behaviour, their social network, or environmental stimuli [9]. Not only can ABM be used to understand complex phenomena, but it can also be used to test the impact of policy interventions and inform policy decisions; it has been successfully applied in other areas of public health, particularly the control of infectious diseases [10].
ABM has been used successfully to understand the causes of inequality more broadly, outside the field of public health. Famously, the Schelling model of segregation demonstrated that residential segregation can be generated by relatively simple nearest-neighbour preferences, and it has been used to understand racial segregation patterns in the USA [11]. Additionally, the Sugarscape model developed by Epstein and Axtell has offered insights into the generation of wealth inequality using a relatively simple model that simulates a landscape in which sugar grows and can be harvested by individuals to become their wealth [12,13]. Individuals in the simulation are programmed to harvest the sugar closest to them; strikingly, even when all individuals begin the simulation with equal wealth, wealth inequality emerges after only a short simulation period. Moreover, only a very small proportion of individuals attain high levels of wealth, while a much larger proportion have low levels of wealth. These models, alongside many others developed in the field of social science, have illustrated the benefits of using ABM to understand complex observable phenomena.
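To make the mechanics concrete, the Schelling dynamic described above can be sketched in a few lines of code. This is a minimal, illustrative implementation only; the grid size, vacancy rate, and tolerance threshold below are our own assumptions rather than values from the original model.

```python
import random

# Illustrative parameters (assumed, not from Schelling's original papers)
SIZE, EMPTY_FRAC, THRESHOLD, STEPS = 20, 0.1, 0.3, 50

def make_grid():
    """Random toroidal grid of two agent types ('A'/'B') plus empty cells."""
    cells = (["A", "B"] * (SIZE * SIZE))[: SIZE * SIZE]
    n_empty = int(EMPTY_FRAC * SIZE * SIZE)
    cells = cells[: SIZE * SIZE - n_empty] + [None] * n_empty
    random.shuffle(cells)
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(grid, r, c):
    """An agent is unhappy if fewer than THRESHOLD of its neighbours match it."""
    same = other = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = (r + dr) % SIZE, (c + dc) % SIZE
            if grid[nr][nc] is None:
                continue
            if grid[nr][nc] == grid[r][c]:
                same += 1
            else:
                other += 1
    total = same + other
    return total > 0 and same / total < THRESHOLD

def step(grid):
    """Move every unhappy agent to a random empty cell; return moves made."""
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    moved = 0
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is not None and unhappy(grid, r, c):
                dest = random.choice(empties)
                grid[dest[0]][dest[1]], grid[r][c] = grid[r][c], None
                empties.remove(dest)
                empties.append((r, c))
                moved += 1
    return moved

grid = make_grid()
for _ in range(STEPS):
    if step(grid) == 0:  # stop once every agent is content
        break
```

Even with a mild tolerance threshold (here, agents accept being a local minority of up to 70%), repeated relocation produces strongly clustered neighbourhoods, which is the emergent macro-level pattern the Schelling model is known for.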
A review by Speybroeck and colleagues, covering research published before January 2013, explored how simulation models had been used specifically in the field of socioeconomic inequalities in health [14]. They found only four ABM studies, which focused on understanding differences in health behaviour or infectious disease transmission between socioeconomic groups. Speybroeck and colleagues concluded that ABM is the most appropriate computational modelling method for examining health inequalities, as it can incorporate all the characteristics of a complex system, such as heterogeneity, interaction, feedback, and emergence [14]. However, while the four identified models contained many of the expected features of ABM (e.g., multi-level, dynamic, and stochastic), the review concluded that, to better understand the complex mechanisms underlying health inequalities, more ABMs featuring feedback loops, temporal change, and agent-agent and agent-environment interactions are required.
Since the Speybroeck review, there has been a methodological shift towards complex systems methods in public health and public policy, supported in large part by substantial investments in data accessibility and computing power. In the UK, this is also reflected in the Medical Research Council's updated guidance for the development and evaluation of complex interventions [15] and HM Treasury's Magenta Book annex "Handling Complexity in Policy Evaluation", both published in 2021 [16]. This methodological turn has produced a significant increase in computational modelling papers in the public health literature in recent years; it is therefore timely to update and deepen the previous review. Here, we focus specifically on the contribution of ABM to understanding socioeconomic inequalities in health, by reviewing the application area (e.g., the inequality mechanisms studied, the choice of health outcome(s), and the measure of socioeconomic position) and the details of the ABM approach (e.g., the complexity features represented and whether models have been validated). The aim of this review was to synthesise the growing evidence base on the use of ABM in the field of health inequalities research.
---
Materials and Methods
We followed the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [17]. The protocol for this review was developed and registered on the International Prospective Register of Systematic Reviews (protocol registration PROSPERO 2022 CRD42022301797). PubMed, Scopus, and Web of Science were searched from 1 January 2013 to 15 November 2022. The Scopus search was limited by subject area to Medicine; Social Sciences; Computer Science; Multidisciplinary; Mathematics; Nursing; Economics, Econometrics and Finance; Neuroscience; Health Professions; Psychology; Decision Sciences; and Engineering. For Web of Science, searches were made of the editions of Science Citation Index Expanded and Social Sciences Citation Index. For both Web of Science and PubMed, only the titles and abstracts were searched. An extensive list of search terms was used (see Table S1 in Supplementary Materials) to capture the themes of simulation modelling, socioeconomic inequality, and health. The search strategy was validated against that used in the Speybroeck review [14], confirming that all ABM studies included in that review also appeared using our search strategy.
---
Eligibility Criteria
Table 1 lists the inclusion criteria for this review; these criteria cover the population, exposures, comparisons, outcomes, and study designs (PECOS) required for a study to be eligible. Studies were included if (i) they were full papers published in English, and (ii) the paper described an ABM study whose purpose was to understand the emergence and/or persistence of health inequalities in relation to either non-communicable disease or the differential response of socioeconomic groups to health-related interventions. Papers were only included if they simulated human individuals or groups and investigated within-country socioeconomic inequalities (using measures such as socioeconomic position, income, and education) in health, restricted to differences in health status, health behaviour, or access to healthcare. Papers in which healthy food access was modelled as a proxy for the consumption of healthy food were also included. Studies that developed ABM in combination with system dynamics or population-based models were included. There were no geographical restrictions. Papers that modelled communicable diseases, or water or food access/security, as the health outcomes were outside the scope of this review and were therefore excluded. Studies published before 2013 were also excluded, as these were covered in the Speybroeck review [14].
---
Screening
Searching returned a total of 2533 records. All the records were downloaded to EndNote X9 and imported to the EPPI-Reviewer. The total records were reduced to 1436 following the removal of duplicates. An initial screening was carried out by one reviewer (RW). Following title screening, 477 records were identified for abstract screening. A second reviewer (JB) independently double-screened a randomly selected subset of abstracts (20%). After title and abstract screening, 51 records were selected for full-text screening and 18 of these met the eligibility criteria for data synthesis (Figure 1). The second reviewer (JB) also independently screened all the selected full-text studies to validate that the included papers met all the eligibility criteria. Any disagreements were recorded and discussed to ensure consistency. Two further reviewers (CE and AH) assisted with the screening for papers queried on methodological grounds (n = 29), in instances where it was uncertain whether a simulation model met the inclusion criteria. Manual reference searching identified two additional papers which met the inclusion criteria, giving a final sample of 20 included studies.
---
Data Extraction
Data from the papers were extracted by one reviewer (RW). A second reviewer (JB) assessed the accuracy of the data extraction for all the included studies. In the case of a disagreement, both reviewers referred to the paper in question, and a consensus was reached. A data extraction matrix was developed which included the basic characteristics of the studies (year, location, and study aims), the variables modelled (socioeconomic measure and health outcome), model characteristics (multi-level, dynamic, feedback loops, stochastic, spatial, heterogeneous agents, agent-agent interaction, and adaptation to the environment), if and how the model was validated, the model's function (framework development and/or testing an intervention/scenario), and the relevant findings. The model characteristics were not always explicit but could be derived from the methods section. Relevant findings were defined as those related to health or intervention outcomes stratified by a measure of socioeconomic position.
---
Quality Assessment
Given the lack of an appropriate quality assessment or risk of bias tool for ABM, a quality assessment was not conducted; instead, we recorded compliance with the ODD (Overview, Design concepts, and Details) reporting guidelines [18].
---
Analysis
Descriptive summary statistics were used to describe the search results and study characteristics. We describe the specific modelling details of the included studies using a narrative synthesis in which models are grouped by health outcome.
---
Results
---
Descriptive Analysis
The study characteristics of the 20 included papers are displayed in Table 2. The most common geographical settings for the models were the USA (n = 7) and the UK (n = 4). The other models were set in the Netherlands, Mexico, India, South Korea, Canada, and Japan. Only two models were abstract and did not have a geographical setting. Most of the included models were set at the city level (n = 10); other settings included the national (n = 5), state (n = 2), and district (n = 1) levels.
Most of the included papers described ABMs of socioeconomic differences in health behaviour (n = 14). Three papers focused on explaining socioeconomic differences in physical health outcomes, and three papers modelled a mental health outcome. The measures of socioeconomic position covered income (n = 14), educational attainment (n = 4), social grade (n = 2), and wealth (n = 1).
All of the included models were multi-level (representing both individuals and structural entities), dynamic (capturing change over time), and stochastic (based on probabilities), and all had heterogeneous agents. Most models represented both the individuals and an environment with environmental features (e.g., shops, green spaces, and public transport). In many models, agents could age, die, and change their behaviour over the course of the simulation. Only three papers used the ODD reporting guidelines when describing their ABM [18].
(Table 2 abbreviations: ML, multi-level; D, dynamic; St, stochastic; FL, feedback loop; Sp, spatial; HtI, heterogeneous individuals; AI, agent-agent interactions; EI, agent-environment interactions; V, validation; F, framework; I, test an intervention.)
---
Health Behaviours
Most models with a focus on health behaviour modelled dietary behaviours (n = 7). Four of the models were concerned with physical activity and access to green space, and three modelled substance use, specifically the consumption of alcohol and tobacco, or their purchase as a proxy for consumption.
---
Dietary Behaviour
Papers that used ABM to model socioeconomic differences in dietary behaviours tested the impact of interventions on the consumption of sugar-sweetened beverages [19], the purchase of ultra-processed food [20], the consumption of fruits and vegetables [21,22], and access to healthy food outlets [23]. The interventions were educational campaigns (e.g., nutrition warnings and school-based programmes), advertising campaigns, changes to tax, increased access to vegetables, and reduced vegetable prices. Two further papers focused on the impact of residential segregation on access to healthy food outlets as an explanation for socioeconomic differences in dietary behaviours [24,25]. All models used the income level of the individual or household, educational achievement, or both as the measure of socioeconomic position.
The only paper that did not include a spatial component was set at a national level and explored the impact of tax, nutrition warnings, and advertising on the purchase of ultra-processed food in Mexico [20]. The other six models used an artificial grid space [24], a one-dimensional linear township [25], a raster map representing the spatial distribution of income [21], or actual geographic space, including GIS modelling of real-life cities [19,22,23]. Six of the models included agent-environment interactions, which often captured how individual agents engage with food outlets [21][22][23][24][25]. Only two of the included papers modelled agent-agent interactions, through dietary social norms operationalised via a social network, which influenced taste preferences and health beliefs [22] and the purchasing of ultra-processed foods [20]. Five of the models featured feedback loops; these included updating social norms based on behaviour over the course of the simulation [20,22], food outlets responding to agents' behaviour by closing and opening [23] or changing the type of food available for sale [24], and increasing appetite and overeating following the consumption of foods high in fat, sugar, and salt [21]. Only two of the papers attempted validation, by comparing the simulated outcomes to the 'observed' outcomes in real-world data [19,22].
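As an illustration of the kind of social-norm feedback loop described above, the following sketch nudges each agent's propensity for a healthy behaviour towards the prevalence of that behaviour in its social network. It is a deliberately abstract toy model under our own assumptions (a random network, uniform initial propensities, a fixed nudge rate), not a reproduction of any reviewed model.

```python
import random

random.seed(0)

N, STEPS, NUDGE = 100, 50, 0.1
prob = [random.random() for _ in range(N)]                # propensity to eat healthily
friends = [random.sample(range(N), 5) for _ in range(N)]  # random network (may include self)

for _ in range(STEPS):
    behaviour = [random.random() < p for p in prob]       # agents act stochastically
    for i in range(N):
        # Feedback loop: the local norm (share of friends behaving healthily)
        # pulls the agent's own propensity towards it.
        norm = sum(behaviour[j] for j in friends[i]) / len(friends[i])
        prob[i] += NUDGE * (norm - prob[i])
```

Because each update is a convex combination of the agent's propensity and the observed local norm, behaviour and norms co-evolve over the simulation, which is the defining feature of the feedback loops used in the diet models above.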
---
Physical Activity and Use of Urban Green Space
All the models that investigated the socioeconomic differences in physical activity simulated intervention scenarios. These scenarios included additional physical education in schools, the promotion of active travel, educational campaigns, increasing the availability and affordability of sports activities, improving neighbourhood safety, and increasing the expense associated with driving [26][27][28]. All the models focusing on physical activity used the level of income of the individual or household as the measure of the socioeconomic status and explored a range of physical activity-related outcomes including the minutes of physical activity per day [26], sports participation [27], and walking [28]. Models concerning physical activity involved a spatial component operationalised as either a representation of the actual geographical space [26,27] or an artificial grid [28]. All the models simulated both agent-agent interactions (e.g., social interactions modelled via a social network that impacts behaviour) and agent-environment interactions (e.g., playing outdoors or engaging with sports facilities in the environment). Two models contained feedback loops, including the updating of social norms regarding exercise and travel preferences [27,28] and environmental feedback, including the safety and traffic levels of travel routes on the attitudes towards transport methods [28]. Two models were validated by comparing the simulated outcomes to the outcomes observed in the pre-existing data [26,28].
One paper modelled intra- and inter-city inequalities in visiting urban green spaces, specifically testing the mechanism that the decision to visit these spaces is influenced by an individual's assessment of who had previously visited the space [29]. Given conflicting evidence, the model explored two possibilities: (1) that agents prefer to visit spaces visited by people like themselves (homophilic preference) and (2) that individuals with a lower SES (socioeconomic status) prefer to spend time in areas which those of a high SES visit (heterophilic preference). This model used occupational grade to classify agents into either a high or low SES. The model spatially represented the cities of Edinburgh, Glasgow, Aberdeen, and Dundee, and simulated both agent-environment interactions, in the form of visiting urban green spaces, and agent-agent interactions, via agents assessing the similarity of other agents visiting the green space. The feedback loop in this model was the updating of who visited green spaces as a function of whether 'in-' or 'out-' group members were present in those spaces. Given a lack of observed data, the model was validated using a pattern-matching approach; the model could reproduce the observed patterns of urban green space visitation in a spatial microsimulation of Glasgow.
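The two preference rules can be expressed as a simple decision function. The sketch below is purely illustrative: the function name, the SES shares, and the exact form of the rules are our assumptions, not the specification of the reviewed model [29].

```python
def visit_probability(own_ses, visitor_shares, mode):
    """Probability of visiting a green space given the SES mix of past visitors.

    visitor_shares: dict mapping 'high'/'low' SES to the share of past visitors.
    mode: 'homophilic'   -> agents prefer spaces visited by their own group;
          'heterophilic' -> agents are drawn to spaces visited by high-SES agents.
    """
    if mode == "homophilic":
        return visitor_shares[own_ses]
    return visitor_shares["high"]

# A space whose past visitors were 70% high-SES, 30% low-SES:
shares = {"high": 0.7, "low": 0.3}
p_homo = visit_probability("low", shares, "homophilic")      # low-SES agent tends to avoid it
p_hetero = visit_probability("low", shares, "heterophilic")  # low-SES agent is drawn to it
```

Under the homophilic rule, each visit reinforces the existing visitor mix, so the feedback loop reproduces segregation in green space use; under the heterophilic rule, low-SES visits to high-SES spaces shift the mix and can dampen it.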
---
Substance Use
Two of the models that focused on socioeconomic differences in substance use tested the impact of interventions, including alcohol taxation [30] and the restriction of menthol cigarette sales and tobacco retailer density [31]. One paper simulated several counterfactual scenarios which varied the degree of socioeconomic disparity and gender-related susceptibility to social influence in the context of smoking [32]. All the models used the income level of the individual or household as the measure of socioeconomic position and investigated substance use in the form of smoking prevalence [32], tobacco purchasing [31], and the average number of alcoholic drinks per day [30].
Two models simulated agent-agent interactions including the influence of gender and socioeconomic social norms on an individual's own smoking [32], and social network influences on drinking behaviour [30]. Two models were spatial; one represented the city of New York [30] and the other an abstract town called 'Tobacco Town' [31]. Two models simulated agent-environment interactions, such as travelling to and from locations and engaging with tobacco and alcohol retail outlets [30,31]. One paper not only focused on the consumption of alcohol but also examined the interaction between neighbourhood characteristics, social networks, sociodemographic characteristics, drinking, and violence [30]. Two models featured feedback loops in the form of updates to norms based on drinking and smoking behaviour [30,32]. One model validated the simulated outcomes by comparing these to the outcomes observed in real-world data on the prevalence of smoking in Japan [32].
---
Physical Health
Of the three models that focused on physical health outcomes, one examined the incidence of severe neonatal morbidity and deaths per 1000 live births averted [33], one looked at health status and care need [34], and the third investigated the impact of exposure to air pollution on health status [35]. Two of the papers modelled the effect of potential interventions on physical health outcomes [33,34]: in one, the intervention was altering the eligibility criteria for government-funded social care; in the other, increasing the responsibilities and coverage of community health workers. Each model used a different measure of socioeconomic position: wealth quintile [33], approximated social grade [34], and educational attainment [35].
All three models included the individual and household levels and two included additional levels such as kinship networks and the regional level. One study represented space using a grid based on the geography of the modelled country [34] and two represented the actual geographic space [33,35]. An interaction with the environment was in the form of migration, seeking treatment at facilities, and the exposure to pollution. Only two models included a feedback loop, including feedback between the parental income level and childhood educational attainment and feedback between the level of disease and the probability of developing a further disease [34,35]. Only one model involved agents interacting with each other, in the form of a kinship network, which consisted of familial relationships [34]. None of these models validated their results using real world data. One of the models was used to create a complex theoretical framework to represent the social care system. The geographical and population data inputted into this framework could then be adjusted to model and understand the drivers of the unmet social care need in different countries [34].
---
Mental Health
Two of the three papers focusing on a mental health outcome examined the impact of transport on depression among older adults. The first examined the impact of multiple transport interventions [36], and the second examined that of a free bus policy on public transit use and depression [37]. An individual's income was used as a measure of the socioeconomic status in both papers.
One model carried out three experiments: increasing the walkability and safety of neighbourhoods to promote walking; decreasing bus fares and bus waiting times; and adding bus lines and stations [36]. The second transport model carried out four experiments, altering mean attitudes towards the bus, bus waiting times, the cost of parking, and fuel prices; each experiment was also run with and without the free bus policy [37]. Both models captured the individual and neighbourhood levels. A feedback loop resulted in improved attitudes towards a given mode of travel following a positive experience of that mode. The spatial element was applied to income segregation patterns. In one model, agents interacted with each other in the form of social networks influencing travel behaviour [37]. In both models, agents interacted with the environment by using transport. Both models were validated against empirical data on the prevalence of depression in the United States by gender, age, and income level.
The third paper examined the impact of reducing income inequality on depression among expectant mothers [38]. Four interventions to increase income were tested: two child benefit programs (ACB and CCB), universal basic income (UBI), and an increased minimum wage. This model focused on individuals, and while it captured neighbourhood characteristics for each individual (e.g., a sense of safety and the prenatal services available), the environment was not spatially represented. Agents could decide to make or break social connections with other agents, including whether to break ties with agents with depression. This model was not validated.
---
What Can ABM Tell Us about Socioeconomic Inequalities in Health?
Studies investigating the explanations for the socioeconomic differences in health found that those of a higher socioeconomic position were more likely to be exposed to healthier environments and therefore engage in healthier behaviours and have better health outcomes. For example, one model found that a greater income segregation in communities led to a decreased access to healthy food for lower income households [25]. Another study which modelled agents' movements from work to home found that regardless of the level of air pollution, those with a lower level of education consistently had the highest risk of developing an illness [35].
Models which tested the impact of interventions on the socioeconomic inequalities in health found that some interventions increased inequalities. For example, those of a high socioeconomic position improved their health behaviour more in response to educational campaigns concerning nutrition [22,23]. It was argued that nutritional education campaigns may be ineffective for those of a lower socioeconomic position due to a sensitivity to food prices and a lack of access to healthy alternatives [22]. Similarly, it was found that the promotion of active travel had greater benefits for those of a high socioeconomic position, as they are more likely to travel by car and travel by car more often to extra-curricular activities prior to the intervention [26].
However, some modelled interventions decreased socioeconomic inequalities in health. For example, one model tested the impact of a sugar-sweetened beverage tax and found that, at a 25% tax, the reduction in the consumption of sugar-sweetened beverages was greater among low-income populations [19]. This finding was largely the result of price increases making sugar-sweetened beverages less affordable for low-income households. Another study, which modelled expanded responsibilities and increased coverage for accredited social health activists who perform postnatal check-ups, found that these interventions resulted in greater decreases in neonatal morbidity and mortality among those of a low socioeconomic position [33]. Yang and colleagues also showed that, among older adults, when attitudes towards bus use improved and waiting times decreased, the estimated decreases in depression were greater among low-income groups [37]. This greater benefit arose because those on a low income are less likely to own cars and are therefore more responsive to an intervention to increase the uptake of public transport, which increases the number of non-work trips they take, benefiting their mental health.
---
Discussion
This review included 20 papers that described the ABM of the socioeconomic inequalities in health that have been published since January 2013, the end point of the Speybroeck review which found only four ABM studies on the topic [14]. Using ABM in the context of socioeconomic health inequalities was most common in the USA and UK (n = 11). The included studies illustrated that ABM is a useful tool to understand complex problems and has been used flexibly to represent dynamic, multi-level processes, often in physical space, and to capture the interactions between individuals and interactions with their environment. These models can tell us about the causes of health inequalities, potential interventions to reduce health inequalities, and which interventions may inadvertently increase health inequalities.
Typically, ABM has been used to explore socioeconomic differences in health behaviours (n = 14) including diet, physical activity, access to green space, and substance use, but few have approached the socioeconomic differences in physical and mental health outcomes. Additionally, only one paper modelled access to healthcare as a potential explanation for socioeconomic inequalities in health [33]. To an extent this is unsurprising, given a historic focus on health behaviours in public health [39] coupled with the fact that ABM as a method captures how behaviours at the micro-level give rise to emergent phenomena at the macro-level [40].
Most ABM were used to test a range of interventions (n = 14), from educational campaigns to taxation, and were underutilised for other purposes, such as testing the explanatory value of the theory or mechanisms to explain the generation or persistence of socioeconomic inequalities in health. This is consistent with the Speybroeck review which found that all ABM studies were used to test an intervention or scenario [14] and highlights that a valuable feature of ABM is the ability to experiment and test a range of interventions in silico [40].
Fewer than half of the included studies (n = 9) attempted to validate their models, to varying degrees, using observational data or pattern-matching methods. However, none of the included studies used structural validation techniques, which would ensure that it is the intended "structure of the model that drives its behaviour" [41]. This finding is consistent with the Speybroeck review, which found that only one ABM had been validated using observational data [14]. Additionally, only three of the included papers explicitly used and referred to the ODD protocol, guidelines intended to ensure that an ABM is described fully enough to facilitate its replication [18].
It is clear from the findings of this review that most existing ABM studies investigating socioeconomic inequalities in health have focused on health behaviour. This individualistic focus is not reflective of ABM in the social sciences more generally, where the method has been used to understand broader social phenomena such as racial segregation and the generation of wealth inequality at the societal level [11][12][13]. While these patterns are generated by individual-level behaviours, such models do not seek to explain the behaviours themselves. Reducing health inequality to differences in health behaviour is problematic, given that research has shown that, for the same level of any given behaviour, health outcomes remain worse for the most socioeconomically deprived [42].
---
Limitations
Currently, there is no available tool to assess the quality of ABM studies, and therefore we could not ensure that the models included in this review were of high quality. There are a variety of quality assessment tools available for other study types, for example, the appraisal tool for cross-sectional studies (AXIS), which can be used to assess a study's design, reporting quality, and risk of bias [43]. Given the increase in ABM studies in public health, it is critical to consider how the quality of these studies will be assessed going forward.
While the Speybroeck review considered a breadth of simulation models [14], we chose to focus on ABM only, given the particular promise of ABM applied to health inequality research and the rapid increase in the use of simulation modelling techniques since 2013 [10]. The application of alternative simulation modelling techniques (e.g., microsimulation and system dynamics) to socioeconomic inequalities in health in recent years awaits further examination.
---
Future Research
Efforts thus far to use ABM to understand socioeconomic inequalities in health have focused on the contribution of health behaviour. However, this focus is at odds with calls from researchers to "move beyond bad behaviours" [44] and with the position of influential public health organisations. For example, the WHO concluded that it is the underlying social and economic factors, rather than health behaviours, that determine health and health inequalities [45]. We are increasingly aware that health inequalities are not only the result of differences in health behaviour, yet little has been done using ABM to understand the complex relationships between the social and economic environments people live in and their health via pathways other than health behaviour.
There are explanations for socioeconomic inequalities in health that shift the focus from individual-level behaviours to the social determinants, which themselves shape health and, to an extent, behaviour [45]. Existing ABMs have started to examine the social drivers of health behaviours (e.g., the role of social networks and social norms) [20,22]; however, they have neglected alternative pathways through which social and economic factors directly or indirectly impact health. It has been argued that ABM could be used to investigate the mechanisms specified in social and economic explanations for health inequality [46]. An existing hypothetical example of how this may be done is the operationalisation of psychosocial theory [46]. Instead of focusing on health behaviours, operationalising psychosocial theory would involve simulating support seeking and giving within friendship networks, which mediate health outcomes via stress pathways. Future research should consider how ABM can be used to simulate alternative mechanisms that could explain socioeconomic inequalities in health and that are not exclusively focused on health behaviour.
---
Conclusions
In recent years, ABM has increasingly been used to explain socioeconomic inequalities in health. ABM allows us to develop a deeper understanding of the complex consequences of individual heterogeneity, spatial settings, feedback, and adaptation resulting from agent interactions with each other and their environment. However, to date, much of the focus has been on understanding the role of health behaviours. The features of ABM provide the opportunity to investigate alternative, more complex explanations for socioeconomic health inequalities. Therefore, an important next step in public health is to attempt to operationalise explanations for the causes and consequences of health inequalities beyond representations of health behaviour.
---
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph192416807/s1, Table S1: Systematic Search Strategy.
---
Data Availability Statement: Not applicable.
---
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. |
The present study examined how emotional fit with culture -the degree of similarity between an individual's emotional responses and those of others from the same culture -relates to well-being in a sample of Asian American and European American college students. Using a profile correlation method, we calculated three types of emotional fit based on self-reported emotions, facial expressions, and physiological responses. We then examined the relationships between emotional fit and individual well-being (depression, life satisfaction) as well as collective aspects of well-being, namely collective self-esteem (one's evaluation of one's cultural group) and identification with one's group. The results revealed that self-report emotional fit was associated with greater individual well-being across cultures. In contrast, culture moderated the relationship between self-report emotional fit and collective self-esteem, such that emotional fit predicted greater collective self-esteem in Asian Americans, but not in European Americans. Behavioral emotional fit was unrelated to well-being. There was a marginally significant cultural moderation in the relationship between physiological emotional fit in a strong emotional situation and group identification. Specifically, physiological emotional fit predicted greater group identification in Asian Americans, but not in European Americans. However, this finding disappeared after a Bonferroni correction. The current finding extends previous research by showing that, while emotional fit may be closely related to individual aspects of well-being across cultures, the influence of emotional fit on collective aspects of well-being may be unique to cultures that emphasize interdependence and social harmony, and thus being in alignment with other members of the group.
While early research has conceptualized emotions as largely intrapersonal experiences that take place within individuals, emotions are also social (Parkinson, 1996) and emerge from dynamic interactions between individuals and their social environment (Campos et al., 1989;Lazarus, 1991;Mesquita, 2010). Because the social environment is culturally constructed, the interaction between individuals and their social environment can lead to variations in emotional experiences across cultures (Markus and Kitayama, 1991;Mesquita and Frijda, 1992). At one level, this cultural
---
Emotional Fit and Individual Well-Being
There is growing evidence to support the notion that experiencing similar patterns of emotions to others within the same culture is important for individual well-being (De Leersnyder et al., 2014, 2015). In a series of studies, De Leersnyder and colleagues directly measured, rather than inferred, emotional fit with culture by using a profile correlation approach -correlating each individual's pattern of emotions in response to different situations with the average emotional pattern of the group. They then assessed the association between emotional fit and well-being in three different cultures (United States, Belgium, and Korea). Their results revealed that having higher emotional fit in relationship-focused situations (i.e., situations that involved relationships with others) was associated with greater relational well-being (i.e., having good interpersonal relationships) across all cultures (De Leersnyder et al., 2014). Emotional fit also predicted psychological well-being across cultures, although the specific contexts in which emotional fit mattered varied depending on culture (i.e., relationship-focused situations in Korea, and self-focused situations in the United States; De Leersnyder et al., 2015). These findings suggest that although there may be some cultural variability in how emotional fit relates to individual well-being, emotional fit is generally important for well-being at some basic level across cultures.
Evidence from research examining cultural norms and well-being further supports this point. Being in alignment with the normative practice of one's own culture is important for individuals' adjustment and well-being (Oishi and Diener, 2001;Kitayama et al., 2010). While the cultural mandates for well-being may vary across cultures, it is universal for people to achieve well-being through actualizing their respective cultural mandates. For example, actualizing values of autonomy and personal control would lead to well-being in Western culture, whereas actualization of the values of interdependence and relational harmony leads to well-being in East Asian culture. In a cross-cultural study comparing Americans and Japanese, it was indeed shown that personal control was the strongest predictor of well-being in the United States, but the absence of relational strain was most predictive of well-being in Japan (Kitayama et al., 2010). Similarly, attaining relational goals, and thus actualizing cultural mandates of interdependent culture, was closely associated with well-being among Asian Americans and Japanese, but not among European Americans. In contrast, attaining independent goals was related to well-being in European Americans but not among Asian Americans or Japanese (Oishi and Diener, 2001). In sum, these studies suggest that fitting with the norms of one's culture is important for achieving individual well-being regardless of one's cultural orientation, even if those norms vary from culture to culture.
---
Emotional Fit and Collective Identity
Parallel to the individualistic focus on the conceptualization and study of emotions as an intra-individual phenomenon, studies of well-being and adjustment have also traditionally emphasized the individualistic, personal aspects of well-being (e.g., personal self-esteem). Yet, individuals' well-being and adjustment are also closely related to the collectivistic aspects of the self. For example, having a positive collective identity, indexed via collective self-esteem -the tendency to have a positive view about one's group identity -has been found to be associated with psychological well-being (Crocker et al., 1994). This relationship was evident especially in Asians (vs. European Americans) even after controlling for the effect of personal self-esteem, reflecting the greater emphasis on the group and group experiences in Asian culture. Given that collective identity may be an important index of well-being that complements the index of individualistic well-being, the current study focuses on the relationship between emotional fit and collective identity (i.e., collective self-esteem and identification with one's group) in addition to the individualistic indices frequently used in studies of well-being (i.e., life satisfaction and depression).
Previous research suggests that the experience of shared emotions with group members is important for constructing a positive group identity (Livingstone et al., 2011;Páez et al., 2015). For instance, Páez et al. (2015) found that perception of emotional synchrony while participating in collective gatherings (i.e., folkloric marches and protest demonstrations) led to greater collective self-esteem and increased identity fusion with the group. Similarly, in a laboratory study that employed an experimental manipulation of emotional fit with pre-existing and arbitrary groups, participants with increased emotional fit with the group indicated greater identification with the group, even when the group was created arbitrarily and carried no real meaning (Livingstone et al., 2011). On the other hand, some research suggests that group identification may also lead to shared emotional experience (Weisbuch and Ambady, 2008;Tanghe et al., 2010). For example, Tanghe et al. (2010) showed that increasing group identification through a laboratory manipulation led to greater similarity in emotional experience among group members.
While these studies suggest that emotional fit may be generally important for achieving positive collective identity (higher collective self-esteem and stronger identification with a group), studies have not yet examined cultural differences in how emotional fit relates to collective identity. However, cross-cultural theorists have long discussed how one's sense of self is closely tied to others in interdependent cultures, whereas it is construed more independently in independent cultures (Markus and Kitayama, 1991). Thus, it follows that collective identity should be affected by the degree of shared experiences with group members to a greater extent in interdependent cultures than in independent cultures, making the link between shared emotional experiences (i.e., emotional fit) and collective self-esteem and group identification especially pronounced in East Asian culture.
---
Broadening the Assessment of Emotional Fit
Previous work on emotional fit has primarily focused on similarity in the patterns of subjective (i.e., self-reported) emotional responses between an individual and a reference group. The current study takes a multi-method approach to the assessment of emotions, and therefore to the measurement of emotional fit. We see emotion as a multi-componential construct that comprises subjective, behavioral, and physiological responses. Although some theories of emotion assume response coherence across the various components of an emotional response (e.g., Ekman, 1992;Levenson, 1994), empirical support for response-system coherence is largely inconsistent. Recently, a dual-process perspective on emotion response coherence has been proposed to reconcile this inconsistency (Evers et al., 2014). This framework suggests two relatively independent emotion systems: one automatic system that is relatively unconscious and fast (e.g., physiological response) and another reflective system that is relatively conscious and deliberate (e.g., subjective and behavioral responses). While the two emotion systems are thought to work together to promote adaptive behaviors (Baumeister et al., 2007), the response coherence between the two systems tends to be weak or nonexistent in contrast to the coherence evident between varying indicators within each system (Evers et al., 2014). This lack of coherence suggests that emotional fit in one of these response domains may not necessarily be associated with emotional fit in another.
The potential variability in emotional fit across emotional response domains (subjective, behavioral, and physiological) may also carry important implications for how emotional fit plays out in different cultures. According to Levenson's biocultural model of emotion (Levenson, 2003;Levenson et al., 2007), self-reports of subjective experience are highly susceptible to cultural influences, facial expressions are somewhat susceptible to cultural influences, and physiological response tendencies are relatively uninfluenced by culture. Because self-reports and behavioral expressions of emotions are visible and can directly influence social interactions, these may need to be modulated according to cultural norms more so than physiology. Therefore, emotional fit with culture may be more likely among subjective and behavioral response domains than in physiological responses. These ideas have yet to be examined empirically, however, because of the narrow interpretation of emotional fit in the literature.
Given the complexity of emotional experiences and varying cultural influence on emotion systems, the current study sought to broaden the concept of emotional fit by using assessments of both automatic and reflective emotion systems. We assessed individuals' subjective (self-report), behavioral (facial expression), and physiological (cardiovascular) responses to emotional stimuli to determine indices of self-reported, behavioral, and physiological emotional fit. Self-report measures of emotion are thought to capture the reflective emotion system, and physiological arousal associated with an emotional response are believed to reflect the automatic system. Facial expressions likely represent a combination of both reflective and automatic processes given evidence for both universal and culturally variable components of facial expressions (Levenson et al., 2007).
---
The Present Study
The present study examines the associations between emotional fit and individual and collective aspects of well-being among a sample of East Asians/Asian Americans (henceforth, Asian Americans) and European Americans. Because we were interested in capturing representatives of two broad cultural groups whose traditional values regarding self and relationship are quite different, we employed stringent criteria that made use of behavioral markers of cultural orientation, family origin criteria, and self-identification to operationalize our cultural groups. These criteria are outlined in the methods and are meant to increase the likelihood that the cultural groups studied reflect the traditional norms and values associated with their respective cultural heritages, which include differential emphasis on social contexts in determining well-being.
In measuring the construct of emotional fit, we used a method from De Leersnyder et al. (2014) that considers the patterns of emotional experience in relation to those of the same cultural group. Here, we measured emotional fit objectively by taking the correlation between the individual's emotional pattern and the average pattern of the group (see the section "Materials and Methods" for details). Thus, rather than reflecting a subjective awareness of one's fit with one's cultural group, this conceptualization of emotional fit reflects an objective measure of normative emotional responding. While it is possible that subjective awareness of emotional fit may also provide valuable information about the relationship between emotional fit and well-being, the subjective measure of fit may be susceptible to demand characteristics. On the other hand, the objective measure of emotional fit allowed us to explore the direct link between normative emotional responding and well-being while separating the effect of demand characteristics (De Leersnyder et al., 2014).
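To make the profile correlation concrete, the computation can be sketched in a few lines of Python. This is an illustrative sketch, not the study's actual code: the four emotion items, the rating values, and the leave-one-out averaging (excluding each person from their own group average) are assumptions made purely for the example.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length rating profiles."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def emotional_fit(person, group):
    """Emotional fit = correlation of one person's emotion profile with
    the average profile of the other members of their cultural group."""
    others = [p for p in group if p is not person]            # leave-one-out
    avg_profile = [mean(ratings) for ratings in zip(*others)]
    return pearson(person, avg_profile)

# Hypothetical ratings on four emotion items (e.g., disgust, fear, amusement, calm)
group = [
    [5, 4, 1, 2],
    [4, 5, 2, 1],
    [5, 5, 1, 1],
    [1, 2, 5, 4],  # a deviant profile: low expected fit
]
fits = [round(emotional_fit(p, group), 2) for p in group]
print(fits)
```

The first three profiles, which resemble one another, receive high positive fit scores, while the deviant fourth profile receives a negative one; in the study, each participant's fit score would then enter analyses predicting well-being.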
To test our research question, we reanalyzed data originally collected as part of a large multi-method project investigating cultural differences in emotional reactivity and regulation. Results of the rest of the experiment are reported elsewhere (Soto et al., 2016). Although these data were not designed for the purpose of analyzing emotional fit, and therefore largely constitute a convenience data set, they did afford several opportunities to advance the emotional fit work and expand it in novel ways. This was an experimental study that collected self-report, behavioral (facial expression), and physiological responses to varying emotional stimuli, with participants being asked to regulate their emotional behavior (i.e., suppress or amplify) for a subset of the trials. Assessing various components of emotions in this study allowed us to explore emotional fit at multiple levels and in multiple ways. Thus, in the present study we examined emotional fit based on self-reported emotions (henceforth, self-report emotional fit) as well as emotional fit based on behavioral and physiological responses (behavioral emotional fit and physiological emotional fit, respectively). We were also able to look at emotional fit in different emotional response contexts (baseline emotional responding, in response to neutral stimuli, and in response to negative stimuli).
We tested two primary hypotheses in the present study. Based on previous studies supporting the relationship of individual well-being with self-report emotional fit (De Leersnyder et al., 2015) and with actualization of cultural norm (Oishi and Diener, 2001) across cultures, we hypothesized that self-report emotional fit would be associated with greater individual well-being (as indexed via increased life satisfaction and lower depression) in both Asian Americans and European Americans. In addition, we hypothesized that self-report emotional fit would be associated with more positive collective identity (as indexed via greater collective self-esteem and increased identification with group) based on previous evidence supporting this link (Livingstone et al., 2011;Páez et al., 2015). Importantly, we also predicted that culture would moderate this relationship, because in many East Asian cultures the self is construed in relation to others (Markus and Kitayama, 1991), and thus, being in alignment with others may have a greater impact on the collective identity of Asian Americans than European Americans. Thus, we expect that the positive association between self-report emotional fit and collective identity will be stronger in Asian Americans than in European Americans. In addition to testing these hypotheses, we conducted a series of exploratory analyses to test whether or not the hypothesized patterns of results for self-report emotional fit would replicate with the behavioral and physiological emotional fit indices.
Lastly, the design of the original experiment allowed us to investigate emotional fit across different emotional contexts. It is becoming increasingly important to recognize the contextualized nature of emotions (Scherer, 2009;Izard, 2010;Aldao, 2013). Emotion researchers have called for increased attention to the cultural and social context of emotions at the collective level in order to enhance our understanding of emotions as a whole (Goldenberg et al., 2017). This view also calls for the need to understand emotions in the context of particular emotional situations, because cultural differences in emotional experience occur in part as a function of varying situation selections across cultures (De Leersnyder et al., 2013). This means that findings from cultural investigations of emotions may vary depending on what emotional situation has been examined in the study, which highlights the importance of studying and understanding emotions in relation to particular emotional situations. Thus, in this study, we examined participants' emotional fit at three different experimental time points: prior to the introduction of any emotional stimuli (Time 0), in response to a neutral film (Time 1), and in response to a disgust-inducing film (Time 2). Previous studies on emotional fit mostly examined participants' broad emotional patterns in a particular environment (e.g., family or work settings; De Leersnyder et al., 2015). We thought that this approach would be most comparable to self-report emotional fit at baseline (Time 0), where participants were in the same setting, prior to presentation of any laboratory emotional stimulus. Thus, our primary hypotheses relating to self-report emotional fit and well-being are specific to measurement of emotional fit at Time 0. However, we also explored whether or not any of the findings observed at Time 0 are also seen at Times 1 and 2, when specific emotional stimuli are introduced.
---
MATERIALS AND METHODS
---
Participants
The final sample consisted of 127 undergraduate students recruited at a large university in the northeastern United States. Fifty-two participants (29 females; 23 males) were identified as East Asians or Asian Americans (referred to as Asian Americans throughout the paper) and 75 participants (49 females; 25 males; 1 missing gender information) were identified as European Americans. Among the total of 127 participants, age information was missing for 24 participants due to experimenter errors. The average age of the remaining 103 participants was 19.50 (SD = 2.86). A demographic screener survey was used to determine participant eligibility for both groups (see below). All participants were either recruited from introductory psychology classes and compensated with course credit or recruited from the general campus community and paid $18 for their participation. All procedures were approved by the university's institutional review board and conducted in accordance with the American Psychological Association's ethical standards.
---
Eligibility Criteria
We relied on several pieces of culturally relevant information, including behavioral information such as language preferences, to go beyond racial or ethnic self-identification to characterize our groups based on criteria employed in previous studies of culture and emotion [see Soto et al. (2005) and Soto and Levenson (2009) for full discussion of the rationale behind the criteria]. European Americans must have been born and raised in the United States and had to self-identify as White or European American. Participants also had to report that their parents and grandparents were born in the United States and identified as White or European American. In addition, European American participants had to report being of Christian or Catholic religion, or growing up with these religions being practiced in their households. Finally, participants had to report that over 50% of their friends while growing up and over 40% of their neighborhood while growing up were of European American background.
Asian American participants had to self-report their ethnicity as Asian or East Asian (e.g., Chinese, Korean, Japanese, and Vietnamese) and have been born either in an East Asian country or in the United States. South Asian participants from countries such as India, Pakistan, or Bangladesh were not eligible. In addition, participants' parents and grandparents also had to meet the same birth-country requirements. Furthermore, participants had to be conversant, though not necessarily fluent, in both English and the Asian language of their culture of origin. There were no religious criteria for the Asian American participants. The criteria around childhood friends and neighborhood were also not applied to this group. While the original criteria were developed for participants living in a large metropolitan area where exposure to culturally similar others is common, this assumption would have been an unrealistic standard for the East Asian and Asian American participants in the community from which participants in the current study were sampled (University Park, PA, United States).
---
Procedure
Data used for the present study were collected as part of a large multi-method project investigating cultural differences in the experience and regulation of physiological, behavioral, and self-reported responses to emotional stimuli. Upon arriving at the lab room, participants signed the informed consent form and sat in a comfortable chair 3 feet away from a 19-inch LCD monitor. Participants completed a series of questionnaires including measures of emotion, depression, life satisfaction, collective self-esteem, the importance of their racial group membership to their identity (see below), and other measures outside of the scope of the present study. After this point, an experimenter of the same gender applied the physiological sensors to participants. Participants then watched a total of five film clips previously used in emotion regulation research (Gross and Levenson, 1993;Kunzmann et al., 2005) while their facial and physiological responses were collected. After each film, participants completed a self-report measure of emotion. All films were between 52 and 62 s in duration, with the exception of the first film, which lasted 22 s. Film 1 was the same across all participants and was a neutral film (seagulls flying over a beach). Films 2-4 were disgust films. The first disgust film (Film 2) always depicted an eye operation and was not associated with any specific emotion regulation instructions. The next two films were of a burn victim's skin graft and an arm amputation, and participants were asked to either amplify or suppress their emotional expression while viewing the films. The order of regulation instructions and the actual film presentation for films 3 and 4 were counterbalanced.
Film 5 was a slightly positive film (nature scenes) used to help participants recover from negative emotions induced by previous films [see Soto et al. (2016) for more detailed information about the methods and procedures].
The fact that this convenience dataset consisted only of neutral, relaxing, and disgust elicitors limited the scope of our emotional fit variable. However, given that disgust reactivity does not tend to vary greatly across cultures (Rozin et al., 2008), we also thought this would provide a more conservative test of our research question pertaining to cultural moderation. In addition, examining emotional fit in response to neutral stimuli may provide important information that has been hitherto unexamined, given that neutral stimuli are often processed similarly to negative stimuli (Codispoti et al., 2001;Lee et al., 2008), especially among clinical populations (Felmingham et al., 2003;Leppänen et al., 2004). Thus, responses to the neutral stimuli could reflect individual differences in responding that could lead to variability in emotional fit that may be meaningfully related to well-being outcomes.
The present study examined emotional fit at the first three time points prior to the introduction of emotional regulation instructions -emotional fit at baseline (Time 0), emotional fit in response to neutral film (Time 1), and emotional fit in response to the disgust film (Time 2). We did not include time points after emotion regulation instructions were presented because the impact of these instructions on emotional fit is outside of the scope of the present study. Because the collection of behavioral and physiological data began with the introduction of neutral film, baseline response (Time 0) consisted of the self-report measure of emotion only. Responses to neutral film (Time 1) and disgust film (Time 2) consisted of self-report, behavioral, and physiological responses.
---
Measures

Satisfaction With Life Scale
Participants completed the five-item Satisfaction With Life Scale (SWLS; Diener et al., 1985), which assesses global judgments of satisfaction with one's life. Participants are asked to rate their responses to statements such as "in most ways my life is close to my ideal" and "the conditions of my life are excellent," using a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree). Higher scores indicate greater satisfaction with life. The SWLS has shown good internal consistency in previous studies, with alpha coefficients ranging from 0.79 to 0.89 (Pavot and Diener, 1993). Cronbach's alpha coefficients in the current sample were 0.79 for Asian Americans and 0.84 for European Americans, indicating acceptable to good reliability.
---
Center for Epidemiologic Studies Depression Scale
The CES-D is a 20-item self-report inventory of depressive symptoms (CES-D; Radloff, 1977). Participants use a 4-point Likert scale (0 = rarely or none of the time to 3 = most or all of the time) to rate the degree to which they experienced, over the past week, major symptoms of depression including depressed mood, feelings of guilt and worthlessness, feelings of helplessness and hopelessness, psychomotor retardation, loss of appetite, and sleep disturbance. Higher scores indicate greater depressive symptoms.
The CES-D has shown good internal consistency with alpha coefficients ranging from 0.85 to 0.90 in previous studies (Radloff, 1977). In the current study, the CES-D also indicated good internal consistency with an alpha coefficient of 0.85 for both Asian Americans and European Americans.
Collective Self-Esteem Scale - Private Collective Self-Esteem and Importance to Identity Subscales
The 4-item private collective self-esteem and 4-item importance to identity subscales of the CSES were used to measure participants' positive collective identity and identification with their group (CSES; Luhtanen and Crocker, 1992). Private collective self-esteem refers to one's evaluation of how good one's ethnic group is. Importance to identity (henceforth, identity) assesses how important one's ethnic group is to one's self-concept. The public collective self-esteem (one's perception of how others evaluate one's ethnic group) and membership esteem (one's perception of how good a member one is of one's ethnic group) subscales were not included because they were less relevant to the focus of the present study. Participants use a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree) to rate their collective self-esteem. Higher scores indicate greater collective self-esteem. The original validation study (Luhtanen and Crocker, 1992) reported alpha coefficients ranging from 0.73 to 0.85, indicating acceptable to good internal consistency. In the current sample, the private collective self-esteem subscale indicated acceptable internal consistency, with alpha coefficients of 0.79 for Asian Americans and 0.72 for European Americans. The alpha coefficients for the identity subscale were 0.79 and 0.86 for Asian Americans and European Americans, respectively, indicating acceptable to good internal consistency.
---
Multidimensional Inventory of Black Identity - Centrality Subscale
To assess the degree to which participants identify with their ethnic group (referred to as racial centrality hereafter), we used the 8-item centrality subscale of the MIBI (MIBI; Sellers et al., 1997). The centrality subscale of the MIBI assesses a broader concept of group identification than the CSES identity subscale. In addition to assessing the degree to which ethnic group membership is central to one's core self-concept, the MIBI centrality scale also captures participants' sense of connection/belonging to other members of their ethnic group. Because the items in the original MIBI were developed for African Americans only, we modified the wording of items to accommodate other ethnic groups as well. Items include "overall, being of my racial group has very little to do with how I feel about myself" and "I have a strong sense of belonging to people of my racial group." This modification has been used previously with ethnic minority groups other than African Americans (Perez and Soto, 2011). Participants rated their responses using a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree), and higher scores indicated greater importance of racial group membership to their identity. The internal consistency of the centrality subscale of the MIBI was 0.77 in the original validation study, which indicates acceptable consistency (Sellers et al., 1997). The current sample also indicated acceptable consistency, with alpha coefficients of 0.79 and 0.77 for Asian Americans and European Americans, respectively.
---
Self-Reported Emotional Experience
At six different time points throughout the experiment (i.e., at the beginning of the experiment, and after each of five films), participants were asked to use a 9-point Likert scale (0 = none and 8 = the most in my life) to rate their current experience of 16 different emotions: interest, happiness, surprise, amusement, contentment, relief, anxiety, sadness, annoyance, disgust, embarrassment, boredom, fear, anger, contempt, and stress. This rating scale has been used to measure the experience of specific emotions in previous emotion research (Ekman et al., 1980;Soto et al., 2005).
---
Facial Emotional Expression
Participants' facial expressions during the presentation of films were video recorded and then coded into six discrete emotions (happiness, sadness, anger, surprise, fear, and disgust) using the commercial face reading software FaceReader v. 6.1 (Noldus, 2014). FaceReader objectively estimates the presence of emotion expressions by utilizing over 500 facial landmark cues typically present in emotion expressions as well as specific action units as defined by Paul Ekman's Facial Action Coding System. For each video frame (image), FaceReader supplies a "confidence score" between 0 and 1 representing the likelihood that each discrete emotion is present. FaceReader was trained on over 10,000 expert-coded images and has demonstrated high accuracy for emotion expression classification (Lewinski et al., 2014).
For the present study, we averaged confidence estimates for the presence of each emotion expression over the 1-min film presentation period. This resulted in six scores per film clip per participant representing the average likelihood that each of the emotions were present over the film's presentation.
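The averaging step can be sketched as follows. This is an illustrative sketch only: the frame rate and the column ordering of the six emotions are assumptions, not FaceReader's actual output format.

```python
import numpy as np

# Hypothetical per-frame FaceReader confidence scores for one participant
# during one 1-minute film: rows are video frames, columns are the six
# coded emotions (happiness, sadness, anger, surprise, fear, disgust).
# The 30 fps frame rate is assumed for illustration.
rng = np.random.default_rng(0)
frame_scores = rng.random((30 * 60, 6))  # confidence values in [0, 1]

# Average each emotion's confidence over the film's presentation period,
# yielding six scores per film clip per participant.
film_profile = frame_scores.mean(axis=0)
```

The resulting six-element profile per film is what feeds into the behavioral emotional fit calculation described later.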
---
Physiological Response
Electrocardiography (EKG) and skin conductance level (SCL) were recorded using a Mindware impedance cardiograph (MW2000) in conjunction with the Biopac MP150© device consisting of an eight-channel polygraph and a microcomputer. All physiological data were collected second-by-second using AcqKnowledge© software. EKG, which provides a measure of cardiac activity, was measured through three Biopac pre-gelled, self-adhering, disposable electrodes placed at three places on the torso: the right clavicle at the midclavicular line, just above the last bone of the ribcage at the left midaxillary line, and just below the last bone of the ribcage at the right midaxillary line. Cardiac impedance was collected with four self-adhering electrodes: one placed at the suprasternal notch (jugular notch), one at the inferior end of the sternum (xiphoid process), and two on the back (one located roughly at the fourth cervical vertebra and one located roughly at the eighth thoracic vertebra). MindWare Impedance Cardiography and MindWare HRV 2.51 software (MindWare Technologies Ltd., Gahanna, OH, United States) were used to clean raw data and extract the systolic time intervals (PEP, LVET) and heart rate variability (RSA) using spectral analysis. Clear artifacts in EKG data were deleted and excluded from analyses. In addition, SCL was measured using two disposable electrodes filled with isotonic recording gel that were placed on the middle phalange of the second and fourth fingers of the non-dominant hand. While indicators of both sympathetic (SNS) and parasympathetic nervous system (PNS) arousal can be obtained from analysis of physiological data, the present study focused on the pattern of SNS arousal. SNS indices include HR, cardiac output (CO), stroke volume (SV), left ventricular ejection time (LVET), cardiac impedance (Zo), pre-ejection period (PEP), and SCL. HR is the number of contractions of the heart per minute.
CO is a measure of the overall volume of blood being pumped by the heart per minute. SV represents the volume of blood ejected by the left ventricle of the heart in one beat. LVET is a measure of myocardial contractility. Zo is an indicator of blood flow through the thoracic cavity. PEP is an indicator of sympathetic myocardial drive and indicates the interval between the onset of the EKG Q-wave and the onset of left ventricular ejection. SCL is an index of sweat gland activity at the surface of the skin.
---
Emotional Fit Indices
Following a calculation method used in previous studies of emotional fit with culture (De Leersnyder et al., 2014, 2015), three types of emotional fit with individuals' own culture (i.e., Asian American and European American) were calculated using self-report emotion ratings (self-report emotional fit), behavioral responses (behavioral emotional fit), and physiological responses (physiological emotional fit). The means and variances of all variables used to calculate emotional fit are presented in Table 1.
In order to calculate self-report emotional fit, we first calculated the group's average rating for each of the 16 emotions, excluding the respondent's own scores; this constituted the group's average emotional profile. We then correlated each individual's 16-emotion profile with the group's average emotional profile. The derived correlation coefficients were Fisher's z-transformed in order to achieve a normal distribution of the data. The final transformed coefficient for each individual served as the self-report emotional fit score: the degree to which an individual's emotional profile resembles the normative emotional profile of their group. This process was repeated for each of the three time points (baseline, Films 1 and 2), resulting in three separate self-report emotional fit scores for Times 0, 1, and 2.
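The leave-one-out profile-correlation procedure can be sketched in a few lines. The function below is a hypothetical implementation for illustration, assuming ratings are arranged as a participants-by-emotions array; it is not the authors' actual analysis code.

```python
import numpy as np

def emotional_fit(ratings):
    """Leave-one-out profile-correlation fit scores.

    ratings: (n_participants, n_emotions) array, e.g., 16 self-report
    emotion ratings at one time point. Each participant's profile is
    correlated with the group's average profile computed *without* that
    participant, and the coefficient is Fisher z-transformed.
    """
    n = ratings.shape[0]
    total = ratings.sum(axis=0)
    fit = np.empty(n)
    for i in range(n):
        group_profile = (total - ratings[i]) / (n - 1)  # leave-one-out mean
        r = np.corrcoef(ratings[i], group_profile)[0, 1]
        fit[i] = np.arctanh(r)  # Fisher's z-transformation
    return fit

# Simulated ratings: 50 participants x 16 emotions on a 0-8 scale
rng = np.random.default_rng(1)
ratings = rng.integers(0, 9, size=(50, 16)).astype(float)
fit_scores = emotional_fit(ratings)
```

The same routine applies to the behavioral and physiological profiles, with the six facial-expression scores or seven (preprocessed) sympathetic indices substituted for the 16 self-report ratings.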
Behavioral emotional fit was calculated using the facial expression data. The six emotions used for behavioral emotional fit were happiness, sadness, anger, surprise, fear, and disgust. Following the same procedure as for self-report emotional fit, the group's average behavioral emotional profile was derived from the group's average score on each of the six emotions, excluding the respondent's own scores. Each individual's emotional profile was then correlated with the group's profile, and the Fisher's z-transformation was applied. This process was repeated using the responses to Films 1 and 2, resulting in two separate behavioral emotional fit scores for each individual at Times 1 and 2.
For calculating physiological emotional fit, we used seven indices of sympathetic activation collected during the first two films: HR, CO, SV, LVET, Zo, PEP, and SCL. Among these, Zo and PEP decrease as SNS activity increases, so these two indices were reverse coded by multiplying them by -1, such that higher values indicate greater SNS arousal. In addition, because the indices were originally on different scales, we standardized the scores using the formula (x - x_min)/(x_max - x_min), which transformed the data onto a 0-1 scale. The rest of the process was identical to that of self-report and behavioral emotional fit. We first calculated the group's average scores for each of the seven sympathetic indices, excluding the respondent's own score, and used this as the group's average emotional profile. This was correlated with each individual's profile of physiological responses, and the correlation coefficients were then Fisher's z-transformed. The process was repeated for each individual using the responses to Films 1 and 2, resulting in two separate physiological emotional fit scores for each individual at Times 1 and 2.
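The reverse-coding and rescaling steps can be sketched as below. The column ordering of the seven indices is an assumption made for illustration; only the two reverse-coded columns (Zo, PEP) matter to the logic.

```python
import numpy as np

# Assumed column order for illustration: HR, CO, SV, LVET, Zo, PEP, SCL
ZO, PEP = 4, 5

def preprocess_sns(indices):
    """Reverse-code Zo and PEP, then min-max scale each index to [0, 1].

    Zo and PEP decrease as SNS activity increases, so they are multiplied
    by -1 before scaling; afterwards, higher values indicate greater SNS
    arousal for every column, and all columns share a common 0-1 range.
    """
    x = indices.astype(float).copy()
    x[:, [ZO, PEP]] *= -1
    x_min, x_max = x.min(axis=0), x.max(axis=0)
    return (x - x_min) / (x_max - x_min)  # (x - x_min)/(x_max - x_min)

rng = np.random.default_rng(2)
raw = rng.normal(size=(40, 7))  # 40 participants x 7 SNS indices (simulated)
scaled = preprocess_sns(raw)
```

After this preprocessing, the scaled rows feed into the same leave-one-out profile-correlation and Fisher z-transformation used for the other fit indices.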
---
RESULTS
---
Data-Analytic Approach
To test the link between participants' well-being and emotional fit, and whether culture moderates this link, we conducted a series of multiple regression analyses. In these analyses, Emotional Fit variables were always entered in Step 1, followed by Culture in Step 2, and the interaction between Emotional Fit and Culture in Step 3 to test for the hypothesized moderation of culture on the link between emotional fit and well-being. When significant interactions between emotional fit and culture emerged, they were decomposed using a simple slopes analysis (Aiken et al., 1991). In addition, based on prior evidence suggesting gender differences in response to disgust (e.g., Schienle et al., 2005; Rohrmann et al., 2008), we examined the effects of gender on (a) the emotional responses to the disgust film and (b) our indices of emotional fit. Some gender differences emerged across specific facial expressions in response to disgust, and behavioral emotional fit also varied significantly by gender (see Footnote 1).

Footnote 1: We explored gender differences in self-reported, behavioral (facial expressions), and physiological responses to the disgust film. Self-reported emotions in response to the disgust film did not differ by gender, ps > 0.05. Similarly, there were no significant gender differences in facial expressions of disgust, anger, and fear in response to the disgust film, ps > 0.05. However, males showed more happiness expressions than females, t(55) = -2.35, p = 0.023, while females showed more expressions of surprise, t(89) = 2.91, p = 0.005, and sadness, t(103) = 2.96, p = 0.004, relative to males. Looking at physiological responses, males showed greater SCL responses than females, t(81) = -2.44, p = 0.017, but there were no other significant gender differences across the remaining physiological indices, ps > 0.05. We also examined whether emotional fit differed by gender. There were no gender differences in self-report emotional fit at all three time points, or in physiological emotional fit at the two available time points, ps > 0.05. However, males showed greater behavioral emotional fit than females at both Time 1, t(120) = -2.24, p = 0.027, and Time 2, t(118) = -2.78, p = 0.006. Given these gender differences, we re-ran the regression models testing the effect of behavioral emotional fit on outcome variables with gender as a covariate; this did not change any of the reported patterns of results, so we report the models without gender for the sake of parsimony.
In reporting the results, we focus on the main effect of emotional fit in Step 1 and the interaction between emotional fit and culture in Step 3. Correlations between emotional fit and well-being variables and descriptive statistics are presented in Table 2. For our primary analyses (self-report emotional fit at Time 0), we chose not to correct the alpha level (0.05), to preserve power, because we were testing a priori hypotheses (confirmatory analyses), and because we conducted only five regressions to test two questions (Rothman, 1990; Proschan and Waclawiw, 2000; van Belle, 2008; Rubin, 2017). For the exploratory analyses, we employed the Bonferroni correction given the large number of tests conducted. In all, we tested how three types of fit (self-report, behavioral, and physiological) relate to two types of outcomes (individual well-being and collective aspects of well-being) using a total of 30 regressions spanning the specific outcome variables and time points considered. Thus, the adjusted p-value of 0.002 (0.05/30) was used to re-evaluate any significant findings that emerged from analyses using the uncorrected p-value. We present the results both before and after the Bonferroni correction, given the recommendation that corrections for multiple comparisons also have the drawback of reducing power (Rothman, 1990).
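The three-step moderation model and the simple-slopes decomposition it implies can be sketched with ordinary least squares. The variable names, the 0/1 coding of culture, and the simulated data are illustrative assumptions, not the study's actual variables or software.

```python
import numpy as np

def moderation_fit(fit, culture, y):
    """OLS for y = b0 + b1*fit + b2*culture + b3*(fit*culture).

    With culture coded 0/1, the simple slope of y on fit is b1 in the
    culture=0 group and b1 + b3 in the culture=1 group, which is the
    logic behind the simple slopes analysis reported in the text.
    """
    X = np.column_stack([np.ones_like(fit), fit, culture, fit * culture])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    b0, b1, b2, b3 = beta
    return {"slope_culture0": b1, "slope_culture1": b1 + b3}

# Noise-free simulated data with known simple slopes (2.0 and 3.5)
rng = np.random.default_rng(3)
fit = rng.normal(size=200)
culture = rng.integers(0, 2, size=200).astype(float)
y = 1.0 + 2.0 * fit + 0.5 * culture + 1.5 * fit * culture

slopes = moderation_fit(fit, culture, y)

# Bonferroni-adjusted alpha for the 30 exploratory regressions
alpha_adj = 0.05 / 30  # about 0.00167, reported as 0.002 in the text
```

Entering the interaction term last in a hierarchical model is equivalent to testing b3 against zero; decomposing a significant b3 into the two group-specific slopes is what the follow-up analyses below report.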
---
Self-Report Emotional Fit
We first examined the link between self-report emotional fit at Time 0 (EF T0-SR) and individual well-being variables, and whether culture moderated this relationship. There was a significant main effect of EF T0-SR on depression, with higher emotional fit predicting reduced depression, β = -5.45, t(1, 125) = -3.91, p < 0.001. As predicted, the interaction between EF T0-SR and culture on depression was not significant. Similarly, a significant main effect of EF T0-SR was found in predicting life satisfaction, such that higher emotional fit predicted greater life satisfaction, β = 3.29, t(1, 125) = 3.05, p = 0.003. As hypothesized, culture did not moderate this relationship either. Next, we tested the link between self-report emotional fit at the remaining time points and individual well-being variables. The results were largely consistent with the Time 0 findings. There was a significant main effect of self-report emotional fit at Time 1 (EF T1-SR) on depression, such that higher emotional fit predicted reduced depression, β = -4.26, t(1, 125) = -3.21, p = 0.002. There was a significant main effect of EF T1-SR on life satisfaction, with higher emotional fit predicting greater life satisfaction, β = 2.50, t(1, 125) = 2.44, p = 0.016. The same pattern of results emerged with self-report emotional fit at Time 2 (EF T2-SR): there were significant main effects of EF T2-SR on both depression and life satisfaction, β = -3.56, t(1, 124) = -2.19, p = 0.03, and β = 2.75, t(1, 124) = 2.24, p = 0.027, respectively. After applying a Bonferroni correction to these exploratory analyses at Times 1 and 2, only the relationship between EF T1-SR and depression remained significant. Culture did not moderate any of the associations between self-report emotional fit at Times 1 and 2 and individual well-being.
Next, looking at the effects of emotional fit on collective aspects of well-being, there was a significant main effect of emotional fit at Time 0 on collective self-esteem (i.e., one's evaluation of how good one's ethnic group is), with higher emotional fit predicting greater collective self-esteem, β = 1.64, t(1, 125) = 2.43, p = 0.017. As hypothesized, this main effect was qualified by a significant interaction between EF T0-SR and culture, β = 2.79, t(3, 123) = 2.08, p = 0.04. A follow-up simple slopes analysis revealed that the simple slope of the regression of collective self-esteem onto EF T0-SR for Asian Americans was significant (simple slope = 3.05), t(123) = 3.20, p = 0.002, with higher EF T0-SR predicting greater collective self-esteem (Figure 1). In European Americans, the relationship between collective self-esteem and EF T0-SR was non-significant (simple slope = 0.27), t(123) = 0.28, p = 0.779. These findings were specific to Time 0 emotional fit. There were no significant main effects of EF T1-SR and EF T2-SR on collective self-esteem, and no cultural moderation was found at these additional time points. The effects of emotional fit on measures of how important one's ethnicity is to one's own self-concept (CSES identity and racial centrality) were non-significant across all three time points. That is, EF SR at Times 0, 1, and 2 did not predict either CSES identity or racial centrality, and there was no cultural moderation, all ps > 0.05.
---
Additional Indices of Emotional Fit
Next, we explored whether behavioral and physiological indices of emotional fit predicted individual and collective aspects of well-being. Neither behavioral emotional fit at Time 1 (EF T1-BEH) nor at Time 2 (EF T2-BEH) predicted any of the outcome variables, and there was no interaction between EF BEH and culture. Looking at physiological indices of emotional fit, there was no main effect of physiological emotional fit at Time 1 (EF T1-PHY) on any of the outcome variables, and no cultural moderation was found. Similarly, there was no main effect of physiological emotional fit at Time 2 (EF T2-PHY) on any of the outcome variables. However, there was a marginally significant interaction effect between EF T2-PHY and culture in predicting racial centrality, β = 4.03, t(3, 91) = 1.92, p = 0.058. A follow-up simple slopes analysis indicated that the simple slope of the regression of racial centrality onto EF T2-PHY for Asian Americans was significant (simple slope = 3.67), t(91) = 2.09, p = 0.04, with higher EF T2-PHY predicting greater racial centrality (Figure 2). In contrast, the simple slope was non-significant in European Americans (simple slope = -0.36), t(91) = -0.32, p = 0.753. This marginally significant interaction became non-significant when the Bonferroni-corrected p-value was applied.
---
DISCUSSION
The present study examined the association between emotional fit and individual and collective aspects of well-being and the role of culture in this relationship. Emotional fit based on self-report ratings of emotions significantly predicted individual well-being, including reduced depression and greater life satisfaction, in both Asian Americans and European Americans. In contrast, self-report emotional fit in the absence of laboratory stimuli predicted collective aspects of well-being, particularly collective self-esteem, only in Asian Americans. In addition, emotional fit based on physiological response to a strong negative stimulus predicted greater identification with one's group only in Asian Americans, though this cultural moderation was only marginally significant in the initial test and disappeared when the Bonferroni correction was applied.
---
Self-Report Emotional Fit
Emotional fit based on self-reported emotions at all three time points was associated with individual well-being (i.e., lower depression and greater life satisfaction) across cultures. This finding is in line with the view that while there may be different cultural mandates for well-being in interdependent and independent cultures (e.g., social harmony in Japan and personal control in the United States; Kitayama et al., 2010), being in alignment with one's own cultural norms around emotion is generally important for individual well-being across cultures. It has been shown that even though different emotions are preferred in Japan and the United States, the experience of culturally preferred emotions was associated with happiness in both cultures (Kitayama et al., 2006). In a similar vein, experiencing a culturally normative pattern of emotions has been found to be important for psychological well-being in both independent and interdependent cultures, although the specific contexts in which emotional fit becomes crucial vary depending on the respective cultural values (De Leersnyder et al., 2015). Because people's emotions are shaped by how they perceive and appraise their environment (Ellsworth and Scherer, 2003), their fit with the average emotional pattern of others in the same culture may represent their level of sharing and participating in the predominant world-view of that culture. Thus, emotional fit to a certain extent may reflect a general level of social adjustment (De Leersnyder et al., 2011), which may have universal implications for one's psychological well-being.
While we have conceptualized the above relationship as one where emotional fit with one's group might lead to increased well-being, we can also consider the pathway in which individual well-being leads to increased emotional fit. For instance, the cultural norms hypothesis of depression (Chentsova-Dutton et al., 2007) suggests that the symptoms of depression (i.e., impaired concentration, low energy, and anhedonia) may impair individuals' abilities to attend to and enact cultural norms and ideals regarding emotion and emotional expression. Indeed, it has been demonstrated that depressed individuals showed lower emotional fit with their cultural group than did non-depressed individuals (Chentsova-Dutton et al., 2007). These findings demonstrate that perhaps individuals who have lower well-being and greater depression may have more difficulty responding in a culturally concordant manner. As such, more research is needed in order to establish the directionality of the relationship between emotional fit and well-being.
In contrast to the individual well-being findings, culture moderated the relationship between self-report emotional fit and collective identity, particularly, individuals' evaluation of their own cultural group (collective self-esteem). In Asian Americans, greater emotional fit predicted more positive evaluation of their own cultural group, whereas such a relationship was not present in European Americans. People generally experience similarity as safe and comforting, and similarity leads to greater liking (Montoya et al., 2008). This may be especially so in cultures where social harmony and conformity are greatly valued and practiced. Previous research has shown that people in collectivistic societies conform more than those in individualistic societies (Bond and Smith, 1996). It is possible that this greater importance of similarity in East Asian cultures leads to greater liking or more positive evaluation of the group that one also shares an emotional response pattern with. Alternatively, individuals may be more motivated to behave consistently with the group when they feel positively about their own cultural group. It is possible that we see this pattern only in Asian American individuals because conformity, in general, is practiced more in collectivistic than individualistic societies (Bond and Smith, 1996).
On the other hand, the inconsistency between one's own emotions and the modal emotional pattern of one's culture may be more self-threatening in interdependent culture. Negative evaluation of a group that is seen as dissimilar to oneself may represent an attempt to reconcile this threat to self by degrading dissimilar others and in turn preserving or enhancing the self. Alternatively, however, the experience of dissimilarity may lead to negative evaluation of both the individual and group in interdependent cultures. Extensive research on interdependent self-construal in interdependent cultures (e.g., Markus and Kitayama, 1991) suggests that there may be a greater overlap between individual and collective selves in Asian cultures. Although the evaluation of individual self (e.g., personal selfesteem) was not measured in the current study, it is possible that reduced fit with other Asian Americans led to more negative evaluations of the individual self, which in turn spilled over to the evaluation of their collective self.
In addition to the possible role of interdependence and collectivist values in the present findings, the role of Asian Americans' position as a racial minority group in the United States cannot be ignored. For instance, the status of a racial minority and the repeated experience of being marginalized may have led Asian Americans to seek belonging and to place a greater value on the group through which they can fulfill such a need. As such, Asian Americans who share emotional similarity to the members of their cultural group may be able to more readily satiate their need for belonging through their group membership, and in turn, evaluate their group more positively. Additionally, because a minority often experiences being perceived as representing one's broader minority group as a whole, Asian Americans may be more aware of and sensitive to how their individual behavior reflects on outside perceptions of their group as a whole. In the presence of this heightened sense of prescribed connection between their own behaviors and the outside perception of their group, Asian Americans may experience the group with which they share emotional similarity (i.e., greater emotional fit) less effortful to represent, and thus, leading to greater liking or more positive evaluation.
Interestingly, the results relating to self-report emotional fit and collective self-esteem were specific to emotional fit at baseline before any specific laboratory stimuli were presented. This could be because reflective responses to a strong emotional stimulus may override individual or cultural variability in emotional patterns, leading to too little variability in emotional fit indices, which in turn may limit the possibility of identifying any meaningful patterns between emotional fit and outcome measures. In fact, the variance in self-report emotional fit was lowest in Time 2 when the fit was measured in response to a strong negative stimulus. The pattern of results regarding individual well-being is somewhat consistent with this point as well. While the effect of self-report emotional fit on individual well-being was observed at all three time points, the magnitude of effect decreased from emotional fit at Time 0, to Time 1 (in response to neutral film), and to Time 2 (in response to disgust film), and some of the Times 1 and 2 effects were eliminated when employing the Bonferroni correction.
---
Additional Indices of Emotional Fit
Another aim of this study was to explore whether any of the effects found with self-report emotional fit are replicated with other indices of emotional fit, namely behavioral and physiological emotional fit. We did not find comparable patterns of results with these other indices, which is consistent with the dual-process perspective suggesting that there is little response coherence between reflective and automatic emotion systems (Evers et al., 2014). In addition, indices of emotional fit at different levels were largely uncorrelated with each other, although emotional fit indices within the same level (e.g., self-report, physiology) were generally related to each other.
Behavioral emotional fit in response to both the neutral and disgust films did not predict any individual or collective aspects of well-being. Similarly, physiological emotional fit in response to the neutral film did not predict any of the outcome variables. However, a marginally significant interaction pointed to a pattern consistent with our prediction, such that higher physiological emotional fit in response to the disgust film was associated with greater racial centrality in Asian Americans, whereas there was no such relationship in European Americans. In other words, the perceived level of group identification (racial centrality) was mirrored in greater individual-group synchrony in automatic responses to a strong emotional situation in Asian Americans. It is conceivable that when members of an interdependent culture identify with their group, their collective identity becomes deeply internalized, to the point that it is reflected in greater physiological concordance with their group members. This result, however, became non-significant after employing the Bonferroni correction. Given the small sample size, we believe this finding may nevertheless be worth testing in future studies, especially since we observed a pattern similar to that found in the primary analyses (emotional fit relating to collective aspects of well-being for Asian Americans only), although only in response to a strong negative stimulus (Time 2). Future studies aiming to measure physiological emotional fit may note that in the absence of a stimulus to respond to (no stimuli or neutral stimuli), there may be too much variability/physiological noise across subjects to calculate a meaningful fit index. However, the introduction of a punctate stimulus may organize the physiological system enough to allow calculation of the fit indices discussed. The variance in physiological emotional fit at Time 1 was considerably greater than that at Time 2, which further supports this possibility.
Thus, while these findings are not robust, they are suggestive of a possible future direction to pursue when there is adequate power to test the hypothesis.
---
Limitations and Future Directions
The current study has a few important limitations that are worth noting. First, while we used data from a previous study that allowed us to explore behavioral and physiological emotional fit in addition to self-report emotional fit, we did not have behavioral and physiological emotional fit indices at Time 0. Thus, we cannot know whether our self-report emotional fit findings from Time 0 would be corroborated by behavioral and physiological emotional fit measured in the same context. In addition, the choice of emotion elicitors was restricted by the nature of the convenience dataset. In particular, given that disgust may be the emotion with the least cultural variability, the use of the disgust film at Time 2 allowed for a more conservative test of our research question but may also have underestimated the impact of emotional fit. Future studies employing varying indices of emotional fit across diverse emotional contexts are needed for a more in-depth investigation into the effects of emotional fit. Second, our study is cross-sectional, and thus cannot answer questions regarding the directionality of the observed links between emotional fit and well-being. Additionally, the design of the current study does not allow us to explore the specific mechanisms underlying the relationship between emotional fit and well-being, or the cultural moderation observed in predicting collective aspects of well-being. Important next steps would be to examine causality in the link between emotional fit and well-being through a longitudinal design or a laboratory experiment where emotional fit is manipulated (e.g., Livingstone et al., 2011), and to identify the processes through which such causal effects emerge. Third, it will be important to replicate these results in East Asians residing in East Asian countries to disentangle the potential role of interdependence from that of the minority experience in the current findings.
Fourth, careful studies examining gender effects on emotional fit would also be a fruitful avenue of future research. Based on the observed gender differences in behavioral emotional fit, it may be worth examining gender-specific emotional fit (emotional fit calculated using a same-gender reference group) and how it relates to well-being. Lastly, prior studies examining emotional fit using the same profile correlation approach have used relatively larger samples (e.g., N = 266 in Study 3 of De Leersnyder et al., 2015) than the current study. The relatively small size of the current sample, especially with regard to the exploratory analyses with physiological emotional fit (Asian American n = 39, European American n = 56), may have limited our ability to detect significant relationships between the primary variables of interest. Although this preliminary result is interesting, future studies using larger samples should examine this finding further in order to draw more meaningful conclusions.
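For readers unfamiliar with the profile correlation approach mentioned above, its core computation can be sketched as follows. The data, emotion labels, and averaging scheme here are illustrative only; the actual procedure in De Leersnyder et al. involves additional steps (e.g., situation-specific reference profiles), so this is a minimal sketch of the idea, not a reproduction of their method.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length rating vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def emotional_fit(individual, group_profiles):
    """Fit = correlation of one person's emotion profile with the
    average profile of their reference group (excluding that person)."""
    others = [p for p in group_profiles if p is not individual]
    avg = [sum(vals) / len(vals) for vals in zip(*others)]
    return pearson_r(individual, avg)

# Hypothetical ratings on five emotions (e.g., anger, sadness, fear, joy, disgust)
group = [
    [1, 2, 1, 5, 4],
    [2, 2, 1, 4, 5],
    [1, 3, 2, 4, 4],
    [5, 1, 4, 1, 2],  # a member whose profile diverges from the group
]
for person in group:
    print(round(emotional_fit(person, group), 2))
```

A higher (more positive) correlation indicates that the person's pattern of emotional experience resembles the normative pattern of their reference group; the fourth, divergent profile yields a negative fit score.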
---
CONCLUSION
Individuals must constantly navigate their social worlds while paying simultaneous attention to their own needs and behaviors and to the needs and behaviors of those around them. However, the extent to which individual and group behaviors fit each other can vary meaningfully across cultural groups, as can the relationship between this fit and well-being. The present study revealed that emotional fit based on individuals' subjective emotional experience predicted individual well-being across cultures, but predicted collective self-esteem only in Asian Americans. As the first study to examine the relationship between emotional fit and collective aspects of well-being, the current finding adds to the growing body of research attempting to understand emotions as social and interpersonal processes that are naturally embedded in cultural contexts. We believe this underscores the need to consider not only how emotions may conform to normative patterns in one's cultural milieu, but also that this degree of fit may affect members of different cultures in different ways.
---
ETHICS STATEMENT
This study was carried out in accordance with the recommendations of the American Psychological Association's ethical standards with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Penn State University's Institutional Review Board.
---
AUTHOR CONTRIBUTIONS
SC contributed to conception of the work, and collection, cleaning, analysis, and interpretation of data, and she was responsible for drafting and revising the manuscript. NVD contributed to conception of the work, cleaning of physiological data, and revising the manuscript. MM contributed to collection and cleaning of physiological and behavioral data. DA and RA contributed to the cleaning of behavioral data and revising the manuscript. JS supervised the project and contributed to all aspects of the work.
---
Conflict of Interest Statement:
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. |
The NIH SenNet consortium aims to dissect the heterogeneity of senescent cells (SnCs) and map their impact on the microenvironment at single-cell resolution and in the spatial tissue context. This requires the implementation of an array of omics technologies to comprehensively identify, characterize, and spatially profile SnCs across tissues in humans and mice. These technologies fall broadly into two groups: single-cell omics and spatial mapping. To achieve single-cell resolution and overcome the scarcity of SnCs, high-throughput single-cell and single-nucleus transcriptomic techniques have become a mainstay tool for surveying tens of thousands of cells to identify transcriptional signatures in rare cell populations, enabling the discovery of potential new SnC biomarkers. Novel single-cell mass spectrometry methods are being developed for unbiased discovery of proteomic signatures of SnCs. A hallmark of SnCs is the senescence-associated secretory phenotype (SASP), whose comprehensive characterization requires proteomics, secretomics, metabolomics, and lipidomics, with particular attention to SASP-associated extracellular vesicles. High-resolution molecular and cellular imaging of gene expression (e.g., MERFISH) or protein markers (e.g., CODEX) is critical for the study of SnCs in the large-scale tissue context. NGS-based spatial omics sequencing is poised to bridge the gap, realizing both genome scale and cellular resolution in mapping SnCs in tissue. Novel technologies developed within SenNet, such as Seq-Scope and Pixel-Seq, further enable subcellular resolution. SenNet investigators have also developed spatially resolved epigenome and multi-omics sequencing techniques to link the transcriptional or proteomic phenotype of SnCs to epigenetic mechanisms. Further integration with high-resolution imaging makes spatial omics the crucial linchpin connecting mechanistic underpinnings and molecular signatures with morphological features and spatial distribution.
All of these are critical for the construction of a map of SnCs and associated niches in the native tissue environment implicated in human health, aging, and disease, which is one of the main goals of the SenNet consortium.
---
BUILDING A HUMAN REFERENCE ATLAS
Andreas Bueckle, Indiana University, Bloomington, Indiana, United States
The Human Reference Atlas (HRA, https://humanatlas.io) is a comprehensive, high-resolution, three-dimensional atlas of all the cells in the healthy human body. The HRA provides standard terminologies and data structures for describing specimens, biological structures, and spatial positions linked to existing ontologies. In this talk, we will present a high-level overview of the major components of the HRA, including 67 anatomically correct 3D Reference Objects for 29 organs and 31 Anatomical Structure, Cell Types, and Biomarker (ASCT+B) Tables, and the tools to explore, use, author, and review the HRA, including the Registration User Interface, the Exploration User Interface, the ASCT+B Reporter, and the HRA Organ Gallery in virtual reality. We welcome experts and practitioners to join the monthly WG meetings (sign up at https://iu.co1.qualtrics.com/jfe/form/SV_bpaBhIr8XfdiNRH), to explore and contribute to this effort, and to provide feedback on the evolving HRA from diverse perspectives.
---
SESSION 4205 (SYMPOSIUM)
Abstract citation ID: igad104.1557
---
FINDINGS FROM NSHAP: SOCIAL CONNECTEDNESS, HEALTH INDICATORS, MEDICATION EFFECTS, AND PREDICTING MORTALITY
Chair: Lissette Piedra Discussant: Amelia Karraker
The National Social Life, Health, and Aging Project's broad range of social measures and of objective and self-reported health measures enables detailed analysis of the intersections between these fundamental aspects of older adults' lives. The papers in this symposium explore these topics from different angles. The first explores employment as an important form of social participation, establishing that full-time employment among respondents is associated with better cognitive function and fewer ADL and IADL difficulties. The second examines how social isolation affects men and women differently. Social networks are the focus of the third paper, which compares family ties to friendship ties. Using NSHAP's unique medication log, Wilder examines sleep disturbances and the prevalence of respondents taking medications with somnolence as an adverse event, demonstrating the need for more research into how this might affect older adults' health and well-being. Li uses NSHAP data to develop machine learning models that predict 10-year mortality of older adults in the US with better accuracy than logistic regression.
Abstract citation ID: igad104.1558
---
EMPLOYMENT AS A FORM OF SOCIAL PARTICIPATION AMONG OLDER ADULTS: LINKS TO COGNITIVE AND FUNCTIONAL HEALTH
Peilin Yang 1, Linda Waite 2, and Ashwin Kotwal 3, 1. University of Michigan Ann Arbor, Ann Arbor, Michigan, United States, 2. University of Chicago, Chicago, Illinois, United States, 3. University of California San Francisco, San Francisco, California, United States
Within the active aging literature, studies on social participation and health concur that people who are better socially integrated and engage in social activities tend to have better physical, mental, and cognitive health. This study revisits the literature by addressing three primary knowledge gaps: 1) we explicitly examine the change over a 5-year interval in cognition, activities of daily living (ADL), and instrumental activities of daily living (IADL) associated with social participation five years prior; 2) we examine how diversity in participation is associated not only with cognitive function but also with ADL and IADL, about which little is known; 3) we conceptualize employment in later life as a kind of social participation, a part of older adults' lives that is overlooked in the social participation literature. We also examine whether the relationship between social participation and cognition, ADL, and IADL is the same for men and women, and for those employed and those not employed. The study finds that a high level of neighborhood participation predicts worse cognitive, ADL, and IADL outcomes 5 years later, and that a higher level of neighborhood participation is more indicative of worse cognitive outcomes for men than for women. Full-time employment predicts better cognitive, ADL, and IADL outcomes 5 years later. We also find evidence that full-time work creates a stronger buffer against cognitive decline and against developing ADL and IADL difficulties, even among older adults who socialize with family and friends and participate in the community and the neighborhood at a high level.
Abstract citation ID: igad104.1559
---
THE RELATIONSHIP OF SOCIAL ISOLATION TO SELF-NEGLECT AMONG OLDER ADULTS: RESULTS OF A NATIONAL SURVEY
Self-neglect among older adults is characterized by inattention to hygiene and one's immediate living conditions, and may reflect unmet needs from social relationships. We therefore determined whether social isolation was associated with self-neglect and how the association differed by gender. We used data from the National Social Life, Health, and Aging Project (NSHAP) Wave 3 (2015), a nationally representative survey of 3,677 community-dwelling older adults. Social isolation was determined using a 12-item scale assessing household contacts, social network interaction, and community engagement. Self-neglect was assessed in person and included 1) body neglect (lowest quintile of bodily self-presentation related to clothes and hygiene) and 2) household neglect (lowest quintile of household building condition, cleanliness, odor, and clutter). Logistic regression was used to determine the adjusted probability of self-neglect by social isolation, with interaction terms for gender. Results indicated that the association between social isolation and self-neglect differed by gender (p-values for interaction: body neglect: 0.02; household neglect: 0.20). Among women, social isolation was associated with a higher risk of body neglect (social isolation: 26% vs. no isolation: 14%, p=0.001) and household neglect (23% vs. 17%, p=0.05). For men, social isolation was not associated with body neglect (27% vs. 23%, p=0.2) or household neglect (23% vs. 22%, p=0.8). In summary, social isolation was associated with body and household neglect among women, but was not associated with neglect among men. Future work should investigate mechanisms for gender differences and interventions to address or prevent self-neglect through enhancing social connectedness.
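The analytic strategy described in the self-neglect abstract above (logistic regression with a gender-by-isolation interaction term) can be illustrated with a minimal sketch on simulated data. All coefficients, the sample size, and effect sizes below are invented for illustration and do not reproduce the NSHAP analysis; the point is only to show how a gender-specific effect surfaces as a positive interaction coefficient.

```python
import math
import random

def fit_logistic(X, y, lr=1.0, steps=2000):
    """Plain full-batch gradient-descent logistic regression.
    Returns coefficients with the intercept first."""
    w = [0.0] * (len(X[0]) + 1)
    n = len(y)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1 / (1 + math.exp(-z))
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

# Simulate data in which isolation raises neglect risk for women only
# (hypothetical effect sizes chosen for illustration).
random.seed(1)
X, y = [], []
for _ in range(600):
    woman = random.randint(0, 1)
    isolated = random.randint(0, 1)
    true_logit = -1.5 + 0.0 * isolated + 0.2 * woman + 1.2 * isolated * woman
    p_true = 1 / (1 + math.exp(-true_logit))
    X.append([isolated, woman, isolated * woman])
    y.append(1 if random.random() < p_true else 0)

b0, b_isolated, b_woman, b_interaction = fit_logistic(X, y)
print(f"interaction coefficient: {b_interaction:.2f}")  # positive when the effect is gender-specific
```

In a real analysis one would use a statistics package that also reports standard errors and p-values for the interaction term; this sketch only recovers the point estimates.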
Abstract citation ID: igad104.1560
---
LOCAL FAMILY AND FRIEND TIES AND THEIR RELATIONSHIP TO SOCIAL SUPPORT AND STRAIN AMONG OLDER ADULTS
Won Choi, University of Chicago, Chicago, Illinois, United States
Family members and friends who live nearby are likely valuable sources of support for older adults. At the same time, local family and friend ties may also be a source of strain as spatial proximity to close ties can generate more intense interactions. Using data from Round 3 (2015-2016) of the National Social Life, Health, and Aging Project (NSHAP) (N=3,615), this study examines how local family and friend ties reported in older adults' social network roster are associated with instrumental and emotional support and social strain among community-dwelling older adults aged 50 and older. Results from ordered logistic regression models show that having a local friend tie is associated with higher levels of instrumental and emotional support from friends and lower levels of instrumental and emotional support from family. Having a local family tie, on the other hand, is associated with higher levels of instrumental support from family and lower levels of emotional support from friends. Having a local family tie is not related to emotional support from family or instrumental support from friends. Results also indicate that having a local friend tie increases the odds of reporting that friends make too many demands (i.e., higher friend strain) whereas having a local family tie is not a predictor of family strain. Together, results suggest that spatial proximity to friends and, to a lesser degree, family members are linked to how older adults experience social support and strain.
Abstract citation ID: igad104.1561
---
USE OF PRESCRIPTION MEDICATIONS WITH SOMNOLENCE AS A POTENTIAL ADVERSE EFFECT AMONG OLDER ADULTS IN THE UNITED STATES
Jocelyn Wilder, NORC, Chicago, Illinois, United States
Over half of community-dwelling older adults experience sleep disorders, with approximately 40% reporting somnolence and/or excessive daytime sleepiness, which are associated with an increased risk of cognitive impairment and premature mortality. The use and concurrent use of prescription medications with somnolence as an adverse effect may be an overlooked contributor to this growing problem. This study aims
A recent interpretation of AI developments proposes to consider AI as a form of acting that does not have to be intelligent to be successful (Floridi 2013, 2022). The basic idea is to return to how the problem of intelligence was framed by the initiators of contemporary cognitive science (McCarthy et al. 2006). According to Floridi, it is sufficient to have recourse to a counterfactual, which concerns human behaviour. In this sense, the problem of artificial intelligence is only that of making a machine act in ways that would be called intelligent if a human being behaved in the same way. Thus, there is no issue of comparison between human intelligence and machine intelligence. The only relevant issue is to perform a task successfully, such that the result is as good as or better than what human intelligence would be able to achieve. How this happens is not the central issue (although it may have important consequences); the outcome is. This approach to AI is called engineering or reproductive. It aims to reproduce the results or successful outcome of our intelligent behaviour by nonbiological means. In contrast, the cognitive or productive approach to AI aims to produce the nonbiological equivalent of our intelligence; that is, the source of the behaviour that the engineering approach aims to reproduce (cf. Floridi 2011a, b).
The reproductive approach has achieved astounding successes very quickly and promises to continue to advance exponentially. Think of the development of mRNA vaccines against COVID-19 (Pizza et al. 2021), where being able, thanks to AI tools, to do reprogramming (as fast and as coordinated as possible) helped manage the deluge of data associated with the project. In so many areas, reproductive AI is better and tends to replace human intelligence because it is faster, more reliable, and more consistent in its results. The absolute reliability of AI in standardized tasks is probably the main difference from a human operator. While the latter may manifest greater degrees of freedom in task execution (something to do with the cognitive and productive aspects of AI), automated systems guarantee unambiguous, infinitely repeatable performance without fluctuations.
This is also a cultural feature that cannot be overlooked when considering the labour market and industrial production. Indeed, one can identify a characteristic sought by both the supply side and the demand side; namely, the desire for a product that is "perfect" insofar as it is not limited or influenced by human "imperfections". The delegation to a "dumb" (as Floridi calls it) but exceptionally effective AI allows us to make our lives much easier and less tiring. AI as a reservoir of capabilities can therefore tackle any number of problems and tasks for which the human-intelligence characteristics of understanding, awareness, sensitivity, semantics, and meaning are not needed. And this happens, as proposed by Floridi (2013, 2014, 2022) among others, as the world adapts to reproductive AI and not vice versa.
Industrial automation follows this paradigm. The introduction of robots or devices that carry out production and distribution processes with reduced human intervention or diminishing participation is achieved by circumscribing the work environment to the limited capabilities of simple machines. We do not try to build a humanoid robot to wash clothes in a bathtub; instead, we build a microenvironment (such as a washing machine) that takes advantage of available technology. The same happens with automated ironing. This changes not only the way people work towards the realization of these activities, but also the products for which the services are designed. We are talking here about technologies that are not cutting-edge, where AI plays a limited role.
Consider, however, other procedures, such as house cleaning. Robot vacuum cleaners take advantage of AI to move with increased effectiveness in complex environments. It is clear, however, that it will soon be the design of homes that adapts to automated service systems, especially with the needs of the elderly in mind, if robotic assistants become more prevalent for lonely people.
The self-driving car may be one example among the high-tech ones where engineering AI is the absolute protagonist (Bonnefon 2021). The self-driving car does not start out as a classic car that is adaptable to different road locations and can, if need be, travel on unpaved terrain or in adverse environmental conditions, such as a blackout of lighting and electronic signage. The self-driving car comes with specific requirements due to the AI technology that allows the vehicle to move without a human driver. It must move in an environment that provides all the feedback necessary for the efficient execution of its task, which is to move from point A to point B with maximum safety and comfort for the passengers and all who may be in its path. This can be accomplished by engineering the roads, making them suitable for the self-driving car (Birdsall 2014). It is not the car that has to adapt to the environment; rather, the environment is wrapped around a tool that we find particularly useful in terms of saving effort, time, and traffic accidents (Borenstein et al. 2019). Paradoxically, at an early stage, self-driving cars will have a narrow range of available destinations and will thus condition the mobility of those who want to rely on them. For instance, robotaxis can only circulate on a few streets in San Francisco (cf. Heaven 2022) or in very small cities (such as Innopolis).
In general, wrapping the environment in an infosphere has become an increasingly common practice to exploit the potential of AI, where "the infosphere is the whole system of services and documents, encoded in any semiotic and physical media, whose contents include any sort of data, information and knowledge (…) with no limitations either in size, typology, or logical structure. Hence it ranges from alphanumeric texts (i.e., texts, including letters, numbers, and diacritic symbols) and multimedia products to statistical data, from films and hypertexts to whole text-banks and collections of pictures, from mathematical formulae to sounds and videoclips" (Floridi 1999).
Connected to the infosphere is the onlife dimension, i.e., the activity that everyone performs while connected to digital devices, which are also embedded in the wrapping-around logic we referred to above. Environments are changing so that artificial agents (robots, bots, algorithms) can move with greater ease than humans now can. In highly digitally wrapped environments, all relevant data are collected (or at least potentially collected) and analysed without the need for other interventions. Thus, decisions and actions can be made automatically by applications and actuators.
In this context, consider the process of datafication, which is illustrative of many of the ideas discussed above. Datafication, according to Mayer-Schoenberger and Cukier (2013a, b), is the transformation of social action into online quantified data; a procedure that allows for real-time tracking and predictive analysis of consumers' behaviors. Simply stated, datafication is about accessing, with the help of AI tools, previously inaccessible processes or activities and turning them into data that can subsequently be monitored, tracked, analyzed, optimized, or even sold (Cukier and Mayer-Schoenberger 2013). To be sure, the exploitation of Big Data can unlock significant value in areas such as decision making, customer experience, market demand prediction, product and market development, and operational efficiency (Yin and Kaynac 2015), and many of the technologies we use in our daily lives have enabled different ways of 'datafying' our basic activities and behaviors (Da Bormida 2021).
Social networks (such as Facebook or Instagram) notoriously collect and monitor data to market products and services, with the intent of producing recommendations for potential buyers (Chamorro-Premuzic et al. 2017). Yet datafication is a much more pervasive phenomenon than it may prima facie appear, as it is actively pursued (with different goals and aims) by many industries (Pybus and Coté 2021), for example:
• by insurance companies, where the data gathered are used to update risk profiles and business models;
• by banks, to establish the trustworthiness of an individual requiring, for example, a loan;
• by human resources and hiring managers at various levels, who use datafication to identify risk-taking profiles or even to spot potential personality issues;
• by governments and institutions, where datafication and digitalization are often pursued with the intent of minimizing bureaucracy and optimizing transparency in both decision making and resource allocation;
• (in general) by investors worldwide to boost business opportunities, credentials, and productivity.

For example, very successful companies (such as Netflix, Amazon, Uber, and Fitbit) typically merge the resourcefulness of big data with the power of AI to offer their users products that are smart and reliable.
In short, one can argue that datafication, especially if pursued in an infosphere, can make our lives smoother and, in doing so, fundamentally change our societies, how people interact with each other and with their institutions, and probably even transform people's understanding of the concept of community as a whole (Skenderija 2008).
Nevertheless, in the face of these positive effects, any data-driven endeavor that takes place in an infosphere must also be considered (and therefore properly assessed) against the backdrop of the complex and multidimensional issues or challenges that it may contribute to creating, concerning, for instance, decision-making processes, social solidarity, privacy, security, the management of public goods and civil liberties, or even sovereignty (Da Bormida 2021).
For example, in the health care sector, concerns about the datafication of the infosphere relate to the difficulty of respecting ethical boundaries around sensitive data (e.g., Ruckenstein and Schüll 2017). Datafication, it has been argued, has the potential to erode goal orientation and the room for professional judgement (Hoeyer and Wadmann 2020), favoring varieties of neoliberal subjectification (Fotopoulou and O'Riordan 2016; Foucault 1991) in the form of tools that may accelerate the withdrawal of the welfare state from citizens' lives, eventually turning health care into self-care (Ajana 2017).
In the education sector, the major risk is that students may feel constantly under 'liquid surveillance' (Bauman and Lyon 2013; Zuboff 2019), due to the continuous collection and processing of their data at all levels of their learning trajectory in the educational system, from the classroom to the school, and from the region to the state and internationally (Jarke and Breiter 2019). This, it has been observed, can potentially lead to a reduction in their creativity and/or higher levels of stress (Williamson et al. 2020).
Thus, while wrapping up environments to harness the potential of AI represents a good way to improve the human condition, the future of our lives is (and will increasingly be) marked by datafication, which may actively modify our environments in the attempt to achieve greater effectiveness and efficiency. The modification of work processes pursued within the infosphere of an increasingly datafied society has several consequences. While many researchers have investigated the consequences of datafication in separate fields (e.g., Da Bormida 2021), not much work has been done so far to bring all these insights together in one research paper. This is what we propose to do in our contribution.
Specifically, we show that datafication in a rich infosphere may mean that: (a) the full protection of privacy becomes structurally impossible, leading to undesirable forms of political and social control; (b) workers' degrees of freedom and security may be reduced; (c) creativity, imagination, and even divergence from AI logic might be channeled and possibly discouraged; (d) there will likely be a push towards efficiency and instrumental reason, which will become preeminent in production lines as well as in society. All this encourages reflection on the ways in which digital technologies may foster or hinder decision-making processes in future societies, and on how increasingly automatized algorithms, based on machine learning, may gradually take over certain roles that were previously uniquely attributed to humans.

As Kennedy et al. (2015, p. 1) brilliantly put it: 'the advent of big data brings with it new and opaque regimes of population management, control, discrimination and exclusion', something very much akin to what Foucault (1997) called biopolitics; a pervasive mode of power that attempts to understand, control, influence, and even regulate the vital characteristics of any given population (Farina and Lavazza 2021b). In agreement with Lupton (2016), we believe that we are now entering an era in which biopolitics may be enforced through datafication; that is, through the joint combination of extensive datasets of digital information gathered synchronously across multiple domains. All this raises crucial issues surrounding the privacy of individuals as well as their basic civil liberties (such as freedom of movement and freedom of association), which are now, it seems to us, more than ever under threat (Farina and Lavazza 2021b; Pietrini et al. 2022; Lavazza and Farina 2021).
Consider the following example as a paradigmatic illustration of this claim. It involves the collection of biometric data through face recognition algorithms based on machine learning (Gray and Henderson 2017; Ball et al. 2012). This is just one instance of a more general trend involving the application of biometrics in society. We note that the rollout of this technology is taking place, as we write this paper, in many countries, especially those with a rich infosphere that supports widespread technological advancement (such as the development of 5G).
Biometrics can be defined as 'the science of automatic identification or identity verification of individuals using [unique] physiological or behavioral characteristics' (Vacca 2007, p. 589). Roughly speaking, biometric systems can be divided into two main categories: hard biometrics and soft biometrics. Hard biometrics comprises traditional biometric identifiers (such as faces, iris scans, DNA markers, and fingerprints) that are normally used in identity verification technologies (Benziane and Benyettou 2011). Soft biometrics comprises parameters (such as gender, ethnicity, age, height, weight, voice accent, birthmarks, etc.) that can complement hard biometrics and be used to increase the precision or accuracy of the recognition system (Nixon et al. 2015). Soft biometrics typically provides information about a person without, on its own, necessarily providing sufficient evidence to precisely determine that person's identity.
The process of biometric identification is quite complicated and can be summarized in four basic steps (Hu 2017): (1) Enrollment (biometric data are gathered from the individual); (2) Recognition (a template of the individual's identity is created on an artificial system for monitoring purposes); (3) Comparison (future biometric data are gathered from individuals); and (4) Decision.

This does not necessarily mean that the development of AI to improve working conditions should be resisted; rather, we should reflect on how to better organise the process to achieve social and moral good. The first concern of ethics in the face of the advance of AI is with workers and their condition. The goal is therefore to identify the risks that individuals and society at large may face and to find regulatory remedies to those risks. In the next four sections, we will look at areas where the spread of AI in workplaces and processes may require conceptual clarification and both ethical and legislative regulation.
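The four-step loop described above (enrollment, recognition, comparison, decision) can be sketched abstractly as follows. The feature vectors, similarity measure, threshold, and names here are all illustrative assumptions; real systems extract embeddings from raw biometric samples with trained models and use far more sophisticated matching.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class BiometricVerifier:
    """Toy illustration of the enrollment/recognition/comparison/decision
    loop; short vectors stand in for templates extracted from samples."""

    def __init__(self, threshold=0.95):
        self.templates = {}          # identity -> enrolled template
        self.threshold = threshold

    def enroll(self, identity, features):
        # Steps 1-2: gather data and store an identity template.
        self.templates[identity] = features

    def decide(self, identity, probe):
        # Steps 3-4: compare a new probe against the template and decide.
        template = self.templates.get(identity)
        if template is None:
            return False
        return cosine_similarity(template, probe) >= self.threshold

verifier = BiometricVerifier(threshold=0.95)
verifier.enroll("alice", [0.1, 0.8, 0.3, 0.5])
print(verifier.decide("alice", [0.12, 0.79, 0.31, 0.48]))  # near-duplicate probe -> True
print(verifier.decide("alice", [0.9, 0.1, 0.7, 0.2]))      # dissimilar probe -> False
```

The threshold is the policy-laden part of the pipeline: lowering it trades false rejections for false acceptances, which is precisely where the ethical and legal questions discussed in this section arise.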
---
Privacy Issues
Several researchers working on datafication (e.g., Van Dijck 2014) argue that surveillance is 'too optically freighted and centrally organized a phenomenon to adequately characterize the networked, continuous tracking of digital information processing and algorithmic analysis' (Ruckenstein and Schüll 2017, p. 264) that occurs in the world in which we live today. On these grounds, such researchers propose to replace the term 'surveillance' with the term 'dataveillance' (Gitelman 2013; Ruppert 2011), by which they mean that the act of surveillance in today's world does not take place directly from above, but rather is distributed across multiple parties and several domains (covering much of our activity and potentially spanning from business to education, from medicine to justice, and from governance to management).
These researchers (e.g., McQuillan 2016) also notice a different telos (or end goal) between surveillance and dataveillance. Whereas the end goal of surveillance might be defined as the ability to constantly 'see' something or someone, the telos of dataveillance is concerned with the capability of continuously tracking information across multiple domains to capture emergent patterns capable of predicting people's behaviors (not only observing them). Yet algorithms and tracking AI tools are used not only to detect and predict one's behavior but also to shape and actively modify it (Beer 2009; Mackenzie 2005).
For example, the data that users generate might be gathered and processed to give digital feedback capable of indirectly modulating and orienting someone's actions, in a way that subtly departs from direct panoptic forms of discipline but could be argued to be even more effective. An illustration of this claim is the growing usage of wellness programs in corporate settings (Till 2017). Such programs typically encourage employees, through incentives or penalties, to engage in self-tracking activities, with the intent of gathering data that employers (in various forms and at various levels) can then analyze by using proprietary algorithms (Christophersen et al. 2015).
The human face is widely regarded as an ideal trait for automated biometric recognition. Face recognition systems typically utilize the spatial relationships among the locations of facial features (such as the eyes, nose, lips, and chin, together with the global appearance of the face; Jain 2007), in conjunction with rapidly developing artificial intelligence (AI) technologies, to provide information that can be used for security and law enforcement purposes. See Ali et al. (2021) and Boutros et al. (2022) for surveys of recent face recognition technologies.
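The comparison of facial feature representations described above can be illustrated with a toy sketch: modern systems typically compare learned embedding vectors and accept a claimed identity when similarity clears a tuned threshold. All vectors and the threshold below are invented for illustration, not drawn from any real system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(probe, enrolled, threshold=0.9):
    """1:1 verification: accept only if the probe embedding is
    close enough to the enrolled template."""
    return cosine_similarity(probe, enrolled) >= threshold

# Toy 4-dimensional "embeddings" standing in for learned face features.
enrolled = [0.9, 0.1, 0.3, 0.7]
same_person = [0.88, 0.12, 0.31, 0.69]   # small intra-class variation
other_person = [0.1, 0.9, 0.8, 0.05]

print(verify(same_person, enrolled))   # prints True
print(verify(other_person, enrolled))  # prints False
```

The threshold choice governs the trade-off between false accepts and false rejects, which is precisely where the policy questions discussed in this section enter.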
For example, western countries (such as the United Kingdom, the United States, and Australia), being at the forefront of the development of comprehensive surveillance systems, increasingly use such technologies for security purposes (without getting into unnecessary technicalities, anyone walking around London can easily get a feel for this).
The expanding use of this technology therefore raises pressing ethical and social concerns regarding its adoption in society. 'Central to the ethical, legal and policy issues is the tension that exists between the legitimate collection of biometric information for law enforcement, national/international security, and government service provision, on the one hand; and the rights to privacy and autonomy for individuals on the other' (Smith and Miller 2022, p. 168). Descending from this point, there are also issues concerning potential violations of individuals' privacy in the search for wrongdoing, which can lead to an imbalance between a state and its citizenry and which need to be carefully evaluated.
In modern societies, it is normally agreed that the state has no right to engage in selective monitoring of any citizen, unless that citizen has raised strong suspicions of unlawful behavior. Yet the development of facial identification technology invites the active monitoring, and even the full-scale mapping, of law-abiding citizens; in essence, the pervasive wrapping of technology around innocent civilians, which may contribute to undermining the basic universal right not to be investigated selectively (Gstrein and Beaulieu 2022).
Of course, face recognition technology is also used for beneficial purposes. For example, it is widely deployed in airports, where it has contributed to speeding up the processing of incoming passengers by customs authorities. Legislation to facilitate the use of facial recognition programs capable of integrating pictures from passports and various forms of ID (such as driver's licenses) into a national database, which can then be consulted by law enforcement and other government agencies, is being introduced in several countries across the globe; however, the average reader is probably less aware that such technology is also being actively rolled out in many countries, especially in connection with the development of 5G networks.
5G networks, which possess extremely high computational power combined with the huge storage capability of modern clouds (we are talking about zettabytes of data), are a natural complement to this technology. In the fourth and final step of a typical biometric screening pipeline, a possible match is found or not found among the data collected, based on specific algorithms that cross-check all the biometric data obtained on the individual. We note that biometric database screening technology is increasingly employed in this fourth step, as it is believed to remove the human element from the matching process, thereby maximizing objectivity and efficacy in decision-making (Ellerbrok 2011).
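The matching step just described can be sketched as an open-set 1:N search: a probe template is compared against every enrolled template, and a match is reported only when the best candidate is close enough. The templates, distance metric, and threshold below are hypothetical illustrations.

```python
import math

def distance(a, b):
    """Euclidean distance between two feature templates."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=0.5):
    """Open-set 1:N screening: return the ID of the closest enrolled
    template, or None when nothing is near enough (a 'no match')."""
    best_id, best_d = None, float("inf")
    for person_id, template in database.items():
        d = distance(probe, template)
        if d < best_d:
            best_id, best_d = person_id, d
    return best_id if best_d <= threshold else None

# Hypothetical enrolled templates (toy 3-dimensional feature vectors).
db = {
    "alice": [0.2, 0.8, 0.1],
    "bob":   [0.9, 0.1, 0.4],
}
print(identify([0.22, 0.79, 0.12], db))  # prints alice
print(identify([0.50, 0.50, 0.90], db))  # prints None
```

Note that even this toy version contains the ethically sensitive design choices: who is in the database, and how low the "no match" threshold is set.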
Biometric technology is also increasingly considered an effective tool for dealing with security matters (such as terrorism prevention). Because of this, the last decade has seen a very rapid development of biometric technologies (Alsaadi 2021). 'Biometric dataveillance programs', as we may call them, are proliferating under preemptive strategies for combating crime and terrorism and for ensuring homeland as well as international security. We should note that the U.S. Department of Defense (DoD) has called such approaches (perhaps in a Freudian slip) 'population management', which suggests that their potential applications may well stretch, to put it mildly, to much wider realms, quite possibly along the lines envisaged by Foucault (1997).
Anyhow, major recent trends in biometrics typically focus either on individuating behavioral traits or on the development of 'multimodal biometrics' (Ryu et al. 2021), a procedure which involves combining sensor and computing capabilities endowed with enhanced connectivity, with the intent of applying such technologies in a broad variety of sectors and for a broad variety of purposes, far beyond law enforcement or crime prevention (Hu 2017). For example, the latest breakthroughs in the field include the development of sensors that can capture new types of bio-signals (such as heartbeats and brain waves, via, for instance, EEG or ECG), and brain-computer interfaces (BCIs).
Such interfaces are reported to be able to measure neural activity and translate it into machine-readable inputs (Anumanchipalli et al. 2019), which suggests that these devices could, in the future, allow for the detection of thoughts, possibly opening the possibility of influencing the operations of the human brain. We will not focus on such technologies in this paper, as they are mostly covered by state secrets (and are currently under development); instead, we would like to spend the remainder of this section analyzing the case of face recognition technology based on machine learning algorithms, which is equally significant and perhaps poses, at this stage at least, and especially given its widespread application in society, the most significant ethical and social challenges.
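The 'multimodal biometrics' mentioned above are often realized by fusing match scores from several modalities into one decision. A minimal sketch follows; all scores, weights, and the threshold are hypothetical illustrations, not values from any real system.

```python
def fuse_scores(scores, weights):
    """Score-level fusion: weighted average of per-modality match scores,
    each assumed to be normalised to the range [0, 1]."""
    return sum(w * s for s, w in zip(scores, weights)) / sum(weights)

def multimodal_decision(scores, weights, threshold=0.7):
    """Accept the identity claim when the fused score clears a threshold."""
    return fuse_scores(scores, weights) >= threshold

# Hypothetical normalised match scores for three modalities:
# face, voice, and gait, with the face weighted most heavily.
scores = [0.9, 0.55, 0.8]
weights = [0.5, 0.2, 0.3]
print(round(fuse_scores(scores, weights), 2))  # prints 0.8
print(multimodal_decision(scores, weights))    # prints True
```

The attraction of fusion, from a surveillance perspective, is visible even here: a weak signal in one modality (voice, 0.55) is compensated by the others, making the overall system harder to evade.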
Humans are very good at recognizing one another based on facial appearance. Naturally, then, the face can be considered a prime candidate trait for automated recognition.
---
Freedom Issues
Increasingly datafied working environments are geared toward efficiency and, therefore, in general terms, toward reducing worker discretion. In this sense, a certain loss of workers' freedom is inherent, and perhaps even acceptable, in any process not only of automation but also of standardization and compartmentalization of resources and procedures. The shift from the craftsman performing the whole process of pin production to the division of labour among workers performing different tasks was famously described by Adam Smith in the eighteenth century. We should now consider the peculiarity of working in an environment that extensively relies on datafication and is richly wired and interconnected (infospheric) across multiple domains and dimensions.
In such an environment, the human being becomes a facilitator of processes that automated systems are not yet able, or will never be able, to perform. For example, in warehouses this happens through the substantial homologation of workers to the procedures, rhythms, and forms of control and evaluation introduced for processes carried out entirely by industrial robots (Delfanti 2021; Engstrom and Jebari 2022). We do not intend here to make a social and political critique of this kind of evolution of the work environment decoupled from technical considerations about productivity gains, which translate into concrete benefits for consumers in terms of product availability and low costs. In our societies, all workers are also consumers, and this cannot be underestimated.
However, the more we wrap the working environment around robots, the greater the risk that human employees will be totally absorbed into this new production procedure, which may have strong repercussions for workers. This could lead to new forms of exploitation, as some fear, though it is not necessarily the case that this will happen. In any circumstance, the logic of quantification and automation entails a modification of the worker's spaces of freedom. Indeed, it should be emphasized that two of the basic criteria of AI-based approaches are predictability and certainty. These criteria are structurally opposed to the classical idea of freedom, understood as the possibility of choosing, from time to time, between alternative courses of action based on reason (Lavazza and Inglese 2015).
There are several areas in which workers' freedom might be diminished because of the widespread implementation of AI tools and datafication in society. Personnel selection is one such area: the hiring process is progressively being managed by algorithms capable of evaluating candidates according to predefined criteria, set against the perceived compatibility of a subject with a specific task. Such assessments increasingly rest on vast stores of personal data: modern clouds, with storage capacities measured in zettabytes or even yottabytes of images and videos, represent the ideal companion for facial recognition and related technologies, inasmuch as they allow their potential to be fully exploited in richly datafied, infospheric environments. In brief, current face recognition technologies allow huge amounts of personal data from multiple domains and timespans to be stored, reliably accessed at will at any point in time, and selectively checked by fast algorithms specifically designed to sift the information gathered for 'desired' purposes.
Yet facial recognition programs are, to date at least, quite vulnerable to deepfake-based attacks, for example with static facial images (see Ramachandra and Busch 2017 for a helpful review), which raises concerns about the security as well as the effective trustworthiness of those data. In addition, facial recognition technologies might be combined with AI tools preprogrammed to spot specific emotions (e.g., anger) in order to target minorities (e.g., those deemed prone to rebellion) based on ethnicity (that is, on automated analyses of morphological traits); hence, given the pervasiveness of such systems in modern infospheric societies, they could be massively deployed to discriminate against and even oppress certain strata of a given population (those, for instance, who do not adhere to a state religion due to different cultural backgrounds).
Furthermore, given the storage capabilities of modern clouds, which are set to increase dramatically over the next decades, who could guarantee that the biometric data stored in archives now, through the extensive process of datafication, would not become compromising, say, 30 years from now, when certain moral values or virtues might have changed, partly or entirely? Who could then assure that law-abiding citizens could not be prosecuted in 30 or 40 years for behaviors, words, or actions that are completely acceptable now but may not be deemed 'convenient' in the future, if a track record of their actions associated with their morphological traits is permanently stored (and readily accessible) somewhere? Given current trends in cancel culture and the corresponding emergence of 'dataveillance', this possibility should not be too hastily ruled out.
These are crucial issues underlying the usage of facial recognition in biometric mapping, and they promise to have a significant ethical and legal impact on the future of our societies. Having briefly reviewed them, we now look at another application of datafication in rich infospheric environments: industrial automation.
Another consequence for the workers within the wrapped datafied/infospheric environment in which we increasingly live could be the progressive loss of the freedom to change the rules that govern the environment itself. This is a discretionary activity that does not violate quality standards but allows for changes and improvements in the production process, both technically and in terms of working relationships and conditions. For example, introducing a moment of confrontation between workers can improve both productivity and employee motivation. If, however, the procedures do not allow this, any momentary slowdown in the process will be evaluated negatively, even though it may yield better results in the long run. An efficiency-bound environment that monitors all processes in real time and intervenes to make them homogeneous and smooth cannot tolerate unanticipated deviations and tends to discourage or suppress them.
In this vein, another form of freedom can be considered: the idea of self-government (Pettit 2011). The latter entails an overall ability to act without being governed by alien forces, and a self-mastery that sustains full optionality (the freedom to do otherwise, as specified so far). These two accounts of freedom are logically separate and can vary independently of one another. Now suppose a situation in which we have a high level of optionality, but an environment in which heteronomy predominates (freedom in the self-governing sense is not respected). In this situation options would be left open to agents, but the agents would not be free simply by having a set of options open, since the algorithms would be in a position to filter the 'choice environment' (Danaher 2019).
Thus, in wrapped, datafied/infospheric working environments there may be cases of high optionality but low autonomy. For example, soft control mechanisms over workers' routines, including persuasive pop-up ads and nudging techniques, have been employed by Uber to steer drivers toward diverse booking options and more flexibility (Scheiber 2017; Webster 2020). Some have noted that this can lead to power asymmetries and structural control over workers.
This is the case not only in strictly structured fields of work such as logistics, but also in fields where AI is only now appearing, such as medical diagnosis, marketing, or the entertainment industry. In all these cases, workers' freedom in decision-making might be reduced in parallel with the possibility of exercising their creativity, as we shall see in the next section.
If in many areas the human contribution cannot (yet) be dispensed with, one issue related to the progressive depersonalisation of the worker within a datafied environment is the loss of the possibility of cultivating and exercising certain distinctly human characteristics. Returning to algorithmic hiring: there is already a rather large literature on the possible biases introduced by such programs (Tippins et al. 2021; Goretzko and Israel 2021). These biases depend on how the programs were designed and on the type of data on which they were fed and trained. Typical examples of bias introduced by personnel-selection programs trained on time series or on companies' previous informal criteria involve decisions unfavourable to women, ethnic minorities, or social groups that have been historically disadvantaged or excluded.
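One simple way to see how a hiring model trained on historical records can reproduce past discrimination is to audit its selection rates by group, for instance via the disparate-impact ratio used in employment-discrimination analysis (the 'four-fifths rule'). The decision lists below are fabricated purely for illustration.

```python
def selection_rate(decisions):
    """Fraction of candidates receiving a positive decision (1 = hired)."""
    return sum(decisions) / len(decisions)

def disparate_impact(protected, reference):
    """Ratio of selection rates; values below roughly 0.8 are commonly
    read as a red flag under the 'four-fifths rule'."""
    return selection_rate(protected) / selection_rate(reference)

# Fabricated decisions reproduced by a model trained on a company's
# historical hiring records (purely illustrative).
hired_group_a = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0]  # historically disadvantaged group
hired_group_b = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]  # historically advantaged group

ratio = disparate_impact(hired_group_a, hired_group_b)
print(round(ratio, 2))  # prints 0.33, well below the 0.8 rule of thumb
```

The point of such an audit is precisely the one made in the text: without explicit supervision, an 'efficient' model faithfully reproduces the skew present in its training data.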
To be sure, discrimination in the workplace has always existed, and power relations between firms and individual workers have always been highly unbalanced and asymmetrical. A new sensibility in recent decades, however, has brought new attention to the issue and has made it possible to reduce systematic bias in selection and various types of abuses (Woods et al. 2020). Yet the introduction of algorithms considered more efficient and unbiased may, if not properly supervised, risk reintroducing the very same systematic discrimination that we have striven to fight in recent decades (Farina et al. 2022a, b; Bakare et al. 2022; Bugayenko et al. 2023). In addition, this new sort of potential systemic discrimination may be far less detectable (as it is based on mathematical data, which are difficult for non-experts to interpret) than the discrimination which has historically affected less advantaged groups. Contributing to this trend will be the growing need to adapt all procedures related to entire production processes to the automation typical of increasingly AI-managed work environments.
In this sense, control, surveillance, and the system of incentives and sanctions (as discussed in Sect. 2 above) will also have to conform to quantification and datafication. The worker's margins of freedom will then likely be reduced as a result of the need to conform to strictly quantitative criteria in their actions, and in light of the need to be evaluated with tools that prioritize objectivity and efficiency. Ironically, such algorithms are already actively used in the criminal justice systems of certain countries (Custers et al. 2022). Indeed, it is hard to see why we would rely on programs that assess the appropriate sentence for an offender, or the possibility of recidivism after a certain period of imprisonment, and not do so for labour disputes.
Another issue relevant to the economic field and to workers' freedom concerns the possibility of being evaluated and judged by peers rather than by AI algorithms (Ernst and Young 2018; Keystone Consulting 2017). It is generally agreed that there is a duty of dignity owed to human beings, who should be treated as unique individuals defined by personal traits, and not as a set of data unified by the attribution of a first and last name. This duty of dignity seems to be threatened by the widespread adoption of such technologies.
In this experiment, GPT-3 was fine-tuned on most of Dennett's corpus, with the aim of seeing whether the resulting program could answer philosophical questions similarly to how Dennett himself would. The result was that philosophy experts were unable to clearly discriminate between the answers given by Dennett and the answers given by GPT-3.
The two examples discussed above are just paradigmatic instances of an ongoing revolution focussing on the 'creative' possibilities of AI (Miller 2019). The topic of creativity and its definition is one of the most complicated in the field of psychology, but it has to do with the ability to produce something that is new (original and unexpected) and useful (appropriate to the performance of a task) (Sternberg and Lubart 1999, p. 3). In other words, what is creative is the result of a process that is not necessarily reducible to the mechanics of deterministic reasoning. Usually, within creative acts one cannot identify a precise concatenation of stages but rather perceives holistically the emergence of the result (Koestler 1964). In contrast, as far as AI creativity is concerned, the operation of the algorithm is potentially 'transparent', that is, reducible to a finite number of steps, and its success relates either to the direct liking of a human viewer (as in the case of the art contest mentioned earlier) or to its appropriateness (as in the case of the Dennett-like responses of GPT-3, or other reproductive applications of AI, which, thanks to huge databanks and computational power vastly superior to humans', can produce a very large number of solutions to a problem, among which the most appropriate one can be found).
One may wonder whether we will end up delegating all creative tasks to algorithms, especially in wrapped, data-driven environments, where AI can deploy its engineering capability to the fullest degree; and, if this is the goal, whether human creativity at work will be used less and less. Is this a likely scenario? And what consequences might it entail? Firstly, one may ask whether low-cost, AI-produced creativity is sufficient to meet the needs of consumers (of goods and cultural products) and to resolve the problems that may arise from time to time. Today's computers are composing music that sounds 'more Bach than Bach,' turning photographs into paintings in the style of Van Gogh's Starry Night, and even writing screenplays (Miller 2019). The key point, however, seems to be this: every relevant problem that is more than just a procedural query has to do with humans and their complexity. For example, there is a need not only to save energy and reduce climate-altering emissions, but to do this in tune with the desires and goals of the people living in a specific area, with a specific culture and specific values.
These are characteristics shaped by natural evolution that AI tends to counteract or suppress (Malinetsky and Smolin 2021): for example, sociality and relationships; the ability to frequent natural and not just artificial environments; and other activities, including those oriented to a relevant, concrete, and visible purpose.
These aspects are related to physical and mental wellbeing, which go beyond the immediate gains that the new AI-based economy may bring about in terms of physical security, education, income, or general wealth (even assuming an optimistic scenario, on which many do not necessarily agree). Humans are proactive creatures who deeply fear loneliness, boredom, and feelings of worthlessness. In general, the sense of agency and of being held accountable for one's actions is something that underlies freedom as a value, as a property that gives meaning to existence from a phenomenological point of view (Farina et al. 2022a).
---
Creativity
Recently, an American artist won first place in the emerging artist division's "digital arts/digitally-manipulated photography" category at the Colorado State Fair Fine Arts Competition 5 . His winning image, titled "Théâtre D'opéra Spatial," was made with Midjourney 6 , an artificial intelligence system that can produce detailed images when fed written prompts. The affair caused controversy because the (human) jury evaluated the work without considering that it was produced with an AI system, even though the artist openly declared, upon submitting his work, that he had used an AI tool to generate the image. After the artist received the prize, he was inundated with criticism from numerous colleagues, who deemed it inappropriate to compete with a work made that way. "It's like admitting robots to the Olympics" was one of the comments.
There are numerous programs that allow people to create images based on verbal instructions (such as DALL-E 2 7 ). Such programs draw on vast image repositories and modify or mix pre-existing figures based on users' inputs. Until now, they were considered curious pastimes, but their entry into competitions and the art market could revolutionise the criteria of creativity, the way it is evaluated, and the role of human beings in contributing to society's creative processes.
Another experiment sparked discussion in early 2022. Two scholars have, with Daniel Dennett's permission and cooperation, "fine-tuned" GPT-3, the autoregressive language model that uses deep learning to produce human-like text.
5 https://arstechnica.com/information-technology/2022/08/ai-wins-state-fair-art-contest-annoys-humans/, Last Accessed April 2023.
6 https://www.midjourney.com/home/, Last Accessed April 2023.
7 https://openai.com/dall-e-2/, Last Accessed April 2023.
The pervasive application of AI across wrapped, data-driven (infospheric) environments invites an evaluation of criteria of efficiency, timeliness, and replicability as central to the production process, and as particularly valued for what they entail on the wealth and welfare side for consumers and for society as a whole. So-called instrumental reason, that is, the adjustment of means to predetermined ends to achieve the best possible outcome, may thus become the benchmark for the entire economic sector (Acemoglu and Restrepo 2020).
In principle, humans are still responsible for decisions concerning ultimate goals and ultimate choices, but they easily find that in wrapped and datafied microenvironments the whole process revolves around the optimal management of quantitative aspects that can be handled by AI. Speculations about algorithms taking over and altering the purposes for which they were created currently remain science-fiction scenarios (Floridi 2022). However, what we may witness in the short term is a culture increasingly shaped by the onlife dimension typical of personal devices, characterized by speed, real time, ever-better performance, and the minimization of waiting times and expectations. This has as its counterpart an impatience with slowness and with qualitative aspects, and a prevalence of phenomenal aspects over the cognitive ones that distinguish each individual.
Consciousness qua basic feeling of existence, as a background that qualifies all our waking states, seems to be exhibited by at least some living species and, as far as we know, especially by human beings. This is a feature that cannot, to date, be replicated or simulated in artifacts. Intelligence, by contrast, can be partially exhibited in software, sometimes at a level superior to that of human beings in selective domains. This is demonstrated by the ability of computers to defeat humans at chess (as in the case of Deep Blue and Kasparov) and even at Go (an abstract strategy board game in which two players attempt to surround more territory than the opponent). These examples show that appreciation for highly developed forms of intelligence also favours the illusion of seeing consciousness where there is none (as in some types of software, e.g., the one in the movie Her, with which the protagonist falls in love) and of not seeing consciousness where it does exist (as in non-responsive individuals) (Lavazza and Massimini 2018).
If we pursue forms of intelligent functionalism (cf. López-Rubio 2018), we might end up morally devaluing the criterion of the presence of consciousness in favour of the presence of intelligence, or at least of full consciousness associated with the ability to exercise intelligent functionalism (Lanier 1995). One can, of course, argue in favour of an ethical position of this kind, but it is not easy to do so without completely giving up moral intuition, even in rationally supervised forms. And the same goes for creativity. If we delegate the entire creation and all the marketing of, say, a business to a highly efficient algorithm, will so-called creative workers lose their role, and will we, over time, have no more reserves of human creativity? This worry seems related to a certain approach maintaining (perhaps naively) that a "parallel computer" (such as our brain) is capable of producing in ways that are not yet well understood and that exceed the serial capabilities of an analogic computer. However, recent progress in evolutionary computation, especially work grounded in population-based search techniques, seems to suggest the possibility for AI tools (based on parallel processing) to find creative solutions to very practical problems of the real world (Miikkulainen 2021). Evolutionary computation, especially if complemented by deep learning (Schmidhuber 2015; LeCun et al. 2015), can process data both synchronically (in parallel) and diachronically (evolutionarily). It has been observed that population-based search methods based on evolutionary computation can scale better than other machine learning approaches (Miikkulainen 2021, p. 163). This suggests that soon we should see many applications of these AI tools to problems directly involving human creativity in numerous fields, such as engineering (Dupuis et al. 2015), healthcare (Miikkulainen et al. 2021), finance (Buckmann et al. 2021), or even agriculture (Johnson et al. 2019).
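The population-based evolutionary search discussed above can be sketched in a few lines: candidates are selected by fitness, recombined, and mutated over generations. The toy "OneMax" objective (maximising the number of ones in a bit string) and all parameters below are illustrative only.

```python
import random

def evolve(fitness, dim=8, pop_size=30, generations=60, seed=42):
    """Minimal population-based evolutionary search:
    truncation selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.2:                # occasional mutation
                i = rng.randrange(dim)
                child[i] ^= 1
            children.append(child)
        pop = parents + children                  # elitist replacement
    return max(pop, key=fitness)

# Toy objective ("OneMax"): maximise the number of ones in the string.
best = evolve(fitness=sum)
print(best, sum(best))
```

Nothing here is "creative" in the human sense discussed in this section; the sketch only shows how parallel (population-wide) and evolutionary (generational) processing combine to discover good solutions without an explicit derivation.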
There is thus a question of whether the gradual reduction of the creative roles entrusted to humans in highly data-driven environments will lead to an increase in overall system efficiency and in consumer satisfaction; or whether, instead, it may leave uncovered an important part of the innovation that proceeds from the single, unpredictable insights of a few individuals of genius. In addition, the relative untapping of the creativity of workers, who will have become executors of the new ideas produced by automated systems, could lower the motivation and mood of the workers themselves, who will become less and less involved in production (and decision-making) processes, and therefore unable to devise answers even to those decisions that for now are still entrusted to humans.
---
Efficacy and Instrumental Reason Issues
The goal of efficiency, as mentioned above, drives the creation of new environments in which AI-based technology may prevail. It is not necessary to refer to Marx's works to consider how relevant the means of production, and the relationships between workers and production processes typical of a given era, can be in shaping culture and other types of relationships in society. The same logic inheres in the increasingly pervasive application of AI across wrapped and datafied environments.
Being caught up in the apparent gamification of an increasing number of tasks and functions through digital technology may lead to an overvaluing of instrumental reason at the expense of a search for ends and values to which one can give motivated and thoughtful personal adherence. Muldoon and Raekstad (2022) proposed the concept of "algorithmic domination", whereby an individual "is subjected to a dominating power, the operations of which are (either in part or in whole) determined directly by an algorithm". Gamification also permits employers "to intervene at a more minute level in ways that are not feasible if required to be undertaken by a human supervisor".
In this scenario, the business sector seems destined to be increasingly pervaded by AI. Producing quantifiable, guaranteed, and predictable results is one of the main goals of deeply wrapped, datafied environments, a goal that tends to leave no room for uncontrollable and uncontrolled personal paths. Such a scenario, we maintain, requires careful ethical evaluation and constant scrutiny to prevent a one-sided efficientistic view (incapable of an inclusive look at every human being) from prevailing.
---
Conclusion
As AI becomes ubiquitous in society, possibly leading to the formation of increasingly intelligent bio-technological unions, there will likely be a coexistence of a plethora of micro-environments wrapped and tailored around robots and humans. The key element of this pervasive process will be the capacity to integrate biological realms into an infosphere suitable for the implementation of AI technologies. This process will likely require extensive datafication.
This trend can help to meet an increasing number of needs of a growing share of the population by improving the efficiency of production processes and introducing into them elements of quantification, predictability, reproducibility, and the minimization of error and imperfection. All this, however, can also trigger unintended and suboptimal consequences. In this paper we have considered four such consequences that seem to be crucial for decision-making processes in future human societies dominated by AI technologies.
The datafication required to realize the quantification and application of AI resources implies increasing control over the individuals involved. Moral intuition, in fact, is what seems to provide us with the basic preconditions of moral reasoning; that is, the fact of sharing at least some of the basic values of the subjects involved (Audi 2015). The latter fact is mainly due to the fundamental quality of living beings: consciousness. And consciousness is something that intelligent artifacts seem to lack, even though they can mimic moral reasoning at a cognitive level.
One consequence of this shift toward quantification, efficiency, speed, and continuous connection is the projection of these machine characteristics, to which we have become increasingly accustomed, onto our fellow human beings. Tolerance for those who perform less well, or who are less able to keep up with the pace of the AI systems of which we are gradually becoming a part, may diminish, starting precisely in workplaces built around automation and possibly extending to wider society (for instance, through systems revolving around social credit, which may also be based on work performance) (Shew 2020; Nakamura 2019). In those contexts, predictability and reliability are prioritized, and measurement ranks first among the system's capabilities. What does not fit within the parameters, what slows down or hinders the flow of the process, will tend to be pushed aside, expelled, or not even recruited.
There are several levels at which this selection based on efficiency and instrumental reason can take place. There is a more trivially physical one: those who cannot handle the pace of automation cannot participate in the work process. People affected by different forms of illness or disability, the elderly, and those who fall below minimum performance standards will have difficult access to the labour market and, more importantly, may be seen as less useful to society at large, reversing a trend toward inclusion that has been taking hold recently Stypinska 2022;Farina and Lavazza 2022a, b,c;Farina and Lavazza 2021a).
The same, and perhaps to a greater extent, may happen at the cognitive level, as pointed out earlier. The inability, for various reasons, to keep up and be deeply attuned to the wrapped around and datafied environment could lead to the marginalization of those who manifest such detachment from the new AI-colonized context. This is not an inevitable outcome, but it is a risk that can already be glimpsed in a push for a "digital uniformity" that comes from the now compulsory reliance on electronic devices and indeed even social media with varying forms of indirect control and public exposure.
Heßler and colleagues (2022) noticed that "increased importance of empathy and autonomy leads to a higher degree of algorithm aversion. At the same time, it also leads to a stronger preference for human-like decision support, which could therefore serve as a remedy for an algorithm aversion induced by the need for self-humanization". In recent lab experiments Fuchs (in press) found "that the use are quintessentially social beings, who are bound to have contacts with their peers to find satisfaction, often in free and unstructured interactions. The cancellation of these interactions can trigger a reduction in their well-being far greater than the support they could get from the intelligent tools located in increasingly datafied environs.
So, as mentioned above, potential risks exist that need to be addressed pre-emptively as they seem to be inherent in structural trends (and aspects of decision-making processes) based on the widespread diffusion of AI in society. It is the task of philosophy and ethics to help analyse these risks, highlight their contours, and propose solutions so that artificial intelligence may become a valuable complement to human activities, favouring (rather than hampering) social harmony and moral good.
individual involved in the production process and quite possibly over her life. This loss of privacy is typical of new datafied and infospheric environments, where it is not necessarily pursued with the explicit purpose of monitoring the individual (surveillance) but rather of actively predicting her behaviour (dataveillance), by having the individual herself interacting effortlessly with a wide range of integrated technological tools across multiple domains and dimensions.
This need for control may also result in a loss of freedom understood in the classical sense as the possibility of deciding between alternative courses of action. Indeed, the worker must manifest maximally predictable behaviour for her contribution to be as effective and integrated as possible. Freedom, in this context, may become structurally endangered as an end-product, especially in production processes, which are increasingly oriented toward maximal certainty (which is the opposite of freedom).
Another consequence of this production arrangement is the delegation of creativity to algorithms, which are often presented as higher performing, hence preferable to humans because -unlike humans-they are not subject to quantitative and qualitative fluctuations. The risk here is a loss of the reserve in terms of qualitative resource on the part of workers, which -in the long term-could leave some creative areas uncovered, especially those where machines are not (yet) at the level of productive intelligence of humans.
Finally, a more general cultural tendency to favour efficiency and instrumental reason might assert itself because of the structural constraints that environments wrapped around AI tend to produce. A less inclusive and tolerant society could be the result of our onlife characterized by immediacy and absence of expectations, a world -in briefwhere common-sense will leave space to pure objectivity and absolute neutrality based on algorithmic efficiency.
In this vein, datafication points toward an automation of decision-making that makes it primarily efficiency-driven toward predetermined goals. One strategy to rebalance this trend could be to create areas of decision-making that are removed from extreme datafication to allow a process of decision making driven by the choices of individuals without the close guidance of AI. For we know that a strong sense of agency is inherent in human beings, consisting of (presumed) conscious control over their choices and courses of action. The deprivation of this sense of agency usually leads individuals to a reduction of their own well-being (Creed and Klisch 2005).
If, therefore, the efficiency of economic organization is not to become the first and only goal of the social system, with the consequences just highlighted, it is necessary to prevent a form of automated decision-making (based on datafication) from becoming the only method for choices in working environments and in societies in general. Humans manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted 1 3
---
Data Availability N/a.
---
Author contributions
The authors contributed equally to the writing of this paper.
Funding N/a.
---
Declarations
This study develops a framework for understanding user intentions and behaviors within a virtual world environment. The proposed framework posits that the intention to participate in a virtual world is defined by a person's 1) social identity, 2) attitude toward using the service, 3) subjective norms, 4) attitude toward advertising on the service and 5) enjoyment. The proposed model is tested using data (n=319) from members of the virtual world environment. The results support the multidimensional view of social identity and show a strong positive association between social identity and intention and between social identity and behavior, and further confirm the intention-behavior link. Moreover, the results indicate that social identity outweighs the significance of a person's attitude and relevant subjective norms in explaining intention and behavior. The results also indicate that enjoyment strongly explains both ease of use and attitude.
---
Introduction
During the past few years, we have witnessed a remarkable increase in the number of users in virtual worlds. According to KZero [54], there were 1.921 billion registered users in virtual worlds in the first quarter of 2012, more than triple the number of users in 2009. The largest segment of users (802 million) is between the ages of 10 and 15 [54]. Despite the growing popularity of virtual worlds, there is no agreement on the definition and/or typology of virtual worlds [20], [71]. The numerous contextual descriptions provided by academics, industry professionals and the media have further complicated agreement on a common understanding of virtual worlds [91]. One of the earliest definitions of a virtual world was that of Schroeder [74] (p. 25), who defined the virtual environment or virtual reality as "a computer-generated display that allows or compels the user (or users) to have a sense of being present in an environment other than the one they are actually in, and to interact with that environment." Years later, Koster [52] suggested a definition which contains many essential characteristics of a virtual world: "a virtual world is a spatially based depiction of a persistent virtual environment, which can be experienced by numerous participants at once, who are represented within the space by avatars." Castronova [25] adopts a more technologically oriented viewpoint and defines virtual worlds as "crafted places inside computers that are designed to accommodate large numbers of people." Building on the definitions provided by Bartle [16], Koster [52] and Castronova [25], and including an emphasis on the people and their social network, Bell [20] defines a virtual world as "A synchronous, persistent network of people, represented as avatars, facilitated by networked computers." Against this backdrop, social networking sites such as Facebook and LinkedIn are not virtual worlds.
Although the definition is not without its critics [19], social networking sites (SNSs) are defined as "web-based services that allow individuals to (1) construct a public or semi-public profile within a bounded system, (2) articulate a list of other users with whom they share a connection, and (3) view and traverse their list of connections and those made by others within the system" [22]. Thus, SNSs constitute virtual communities which have persistence, but no sense of synchronicity [20].
Keeping Bell's [20] definition of virtual worlds in mind, massively multiplayer online role-playing games (MMORPGs) like World of Warcraft or Ultima Online are virtual worlds. This also applies to other MMO games. However, there is a discussion about whether a distinction should be drawn between game-based worlds and non-game worlds. Some researchers [51], [77] argue that virtual worlds are essentially non-game environments where divergent games can be present but are not the defining characteristic of the world. MMORPGs, instead, are subject to precise gaming rules, and therefore they are essentially games. Even though some MMORPGs provide opportunities for social networking, the game element is central to their functioning [47].
The growing number of Internet users and the popularity of virtual worlds mean that more and more people are becoming involved in different types of virtual environments. This also provides new opportunities for businesses to market products and services in these virtual worlds [38], especially if it can be shown that product placements in virtual worlds are more effective at generating sales and brand loyalty than static marketing channels, such as print and web-based advertisements [92]. Even though little is known about how to effectively market to virtual world participants through avatar-oriented activities, organizations and marketers should consider the online opportunities of marketing to the inhabitants of virtual worlds, as the avatars of users represent prospective targets of current and future business.
This raises a number of interesting research and practical questions about how companies can market themselves, their products and services within a virtual world environment by making sense of the unique features offered by this new medium [92]. Previous research has investigated online interaction in different types of virtual communities, such as text-based [10] and network- and small-group-based virtual communities [11], [30]. On the other hand, research has also investigated several high-interactivity online venues (real-time chat systems, web-based chat rooms and networked video games), and low-interactivity online venues (e-mail lists, website bulletin boards and usenet newsgroups) [13]. Participation has also been examined in special contexts like software user groups [12], and from educational perspectives [93].
Viewing the phenomenon through the lens of social psychology, this study examines the underlying motives of users for participating in virtual worlds, utilizing an applied version of the frameworks presented by Dholakia et al. [30] and Bagozzi and Dholakia [11], [12]. These frameworks were developed to examine user motivations and behaviors in virtual worlds, and are related to the model of goal-directed behavior [69]. Participating in virtual worlds is perceived as intentional social action influenced by several social determinants such as attitude, subjective norms, perceived behavioral control, enjoyment, entertainment value, ease of use and social identity.
In the current study, the authors adopt Bell's [20] view of virtual worlds, which builds on synchronicity, persistence, a network of people, avatar representation and the facilitation of the experience by networked computers. The authors investigate the users of a 2-D virtual world called Moipal aimed at users between the ages of 10 and 15. Moipal is not an MMORPG in the sense that users' stories or narratives do not unfold within the strict constraints of the rules and goals set by the designers. Instead, Moipal has the elements of both a fictional and a physical world and exists primarily as a place for social interactions to occur. However, Moipal is not based on a social platform like Facebook, and therefore it is not a social game. The authors identify Moipal as a virtual world environment which can be classified within the broad domain of massively multiplayer online games (MMOG). It can also be tagged with the label multi-user virtual environment (MUVE) [62]. Moipal offers its players a virtual world environment to do everything from playing minigames to meeting new and existing virtual friends to exploring the many public spaces available to them. The Moipal experience consists of many parts, which are all inextricably linked. Apparently, The Sims Online [60] has been a role model for Moipal. Moipal was launched in October 2007. There were around 120,000 users in Moipal at the end of 2008. Moipal was shut down in September 2011.
Moipal is free to play, but registration is mandatory. At the initial sign-up, each player selects the look and style of an avatar, called a Pal, from a wide range of options, including gender, hair and skin color, clothing, facial characteristics and body type. Pals are automatically given a personal home upon sign-up and invited to personalize it with a variety of furniture and accessories like rugs, lamps, posters and plants. Pals' residences are located in the virtual world called Pal City. The City provides Pals with dozens of different places to visit and opportunities to carry out a wide variety of tasks. Pals can visit, for instance, a horse stable, library, cinema, film studio, radio station, city hall, restaurants, museums, an art gallery, and a holiday resort. By completing tasks related to different places, a Pal can earn Pal-money to buy new furniture or clothing from the divergent shops located in a shopping mall called Pal Store. The tasks are extremely diverse, ranging from eating pizza at Joe's pizzeria, dancing at Cube Club, having snowball fights in Iceland, and training karate at Dojo to feeding dinosaurs at the Museum of natural sciences. To nurture social interactions, Moipal provides communication opportunities such as chatting and sending PalMail to others. The number of friends is not limited in Moipal. Many Pals also create a group or community around a certain topic, such as horse riding, rock stars or fashion. Like-minded friends can then be invited to the group. Non-members are able to request an invitation by MoiMail.
Besides the parties Pals can arrange for their friends, plenty of attractive events are organized around Pal City. These include a fashion event at the beach, a silent movie festival at Kino Lumiere (cinema), a cross-stitch exhibition (pixel art created by Pals) at Art Gallery 44, and Palympics sport events at the sports field. Pals can play several minigames in Moipal, such as Moipal Racing, where a player drives a car with a side-scrolling view. The car is driven across a track and the driver has to avoid hitting pumpkins and other obstacles on the track. Other minigames include Karate, MoiBand (several instruments), Jump rope, PalPing (ping pong), Locomotion (dancing), and MoiPets (virtual dogs), just to mention a few.
In the next section we review the relevant literature to support the development of our hypotheses. This is followed by a discussion of the study's methodology. We then continue with the presentation of the results. Finally, we draw conclusions from the study, outline its main limitations and offer ideas for further research in this area.
---
Goal-directed Behavior vs. Experiential Service Use
Although Bagozzi and Dholakia [9] state that consumer behavior is predominantly goal-directed because goods and services are purchased with a certain goal in mind, it is important to note that not all consumer behavior is based on this utilitarian and information-processing view. As noted by Holbrook and Hirschman [39], using the information processing perspective to explain consumer behavior might not always be the appropriate choice in settings which include playful leisure activities, such as gaming [67]. According to this experiential view of consumer behavior, consumption is viewed as a subjective state of consciousness that includes various symbolic meanings and hedonic responses. As pointed out by Holbrook and Hirschman [39], it is important to recognize and also to contrast the two views of consumption: the information-processing and the experiential view. As this paper is interested in volitional behavior in an experiential service setting (gaming) in which consumer behavior is driven by pleasure-seeking, enjoyment and fun, intrinsic motivational factors such as enjoyment are expected to have a stronger effect on intention and behavior than extrinsic motivational factors like perceived utility.
Prior research has modeled participation in virtual communities and the associated behavior from the viewpoint of goal-directed behavior [10]- [12], [47], [69] which suggests that desires predict intentions, and the traditional antecedents of the theory of planned behavior (TPB), namely attitudes, perceived behavioral control and subjective norms influence intention through desires too. The model of goal-directed behavior [69] has since been revised and applied in many studies. In this case, we consider applications that discuss intentional social action in the context of groups [8], virtual communities [10], [30] and online venues [13].
Nysveen et al. [67] p. 336, who studied antecedents to mobile service usage, argue that experiential services are characterized by "ritualistic orientation and hedonic benefits derived from the use of the service, whereas goal-directed services are characterized by instrumental orientation and utilitarian benefits related to the use of the service". On this basis, we now present a framework combining aspects of goal-directed behavior and experiential service use.
---
Conceptual Model and Hypotheses
Building on the research on both goal-directed behaviors [10]- [12], [30] and experiential service use [67], we propose the following framework (Figure 1) to capture the antecedents of intention and behavior in the context of virtual worlds characterized by hedonic pleasure-seeking motives. In the next sections we discuss the model in more detail, develop the hypotheses and review relevant literature to support them.
---
Ease of Use
Perceived ease of use refers to the degree to which a potential user of a certain technology expects the target system to be free of effort [28], [29]. Ease of use is one variable introduced by Davis [28] under the technology acceptance model (TAM), an adaptation of the theory of reasoned action [34]. However, TAM focuses precisely on explaining purposive behavior in the context of technology use. TAM also posits that two beliefs, perceived usefulness and perceived ease of use, influence computer acceptance through attitude in the following sequence: first, the design features of a certain technology affect a person's perceptions of its usefulness and ease of use. Consequently, the person forms a certain attitude toward using the technology. Finally, attitude produces a behavioral response, that is, actual system use.
The effect of perceived ease of use on information system acceptance and use has been studied extensively in the TAM research domain (for a review, see [50]). Ease of use has been found to explain a considerable amount of the variance in attitude. In experiential service settings, ease of use has been found to have a significant association with attitude toward use and intention to use, but its explanatory power is not very strong with regard to either [67]. In this study, the concept of ease of use is somewhat complicated because ease of use may not exactly reflect the motivation of online game users. The authors acknowledge that "without usability no one can play a game; make it too usable and it's no fun" [55] p. 319. However, in the case of online gaming acceptance, Hsu and Lu [42] found that ease of use, rather than usefulness, appeared to be the key determinant of online game play. In addition, Hsu and Lu [43] have shown that perceived ease of use has significant effects on both perceived enjoyment and the preference to participate in online game communities. They found in their study that an easy-to-use interface enhances enjoyment and encourages people to re-participate, whereas difficulties of use make people resist participation.
The relationship between enjoyment and intention is supported by many studies, particularly with reference to hedonic information systems [1], [46], [81], [84]. Davis et al. [29] argue that users who get enjoyment from using an information system are more likely to form behavioral intentions compared with other users who do not experience as much enjoyment. Perceived enjoyment has also been shown to be a significant predictor of the intention to use virtual worlds [66], [75]. Therefore, we propose that:
H2c: Enjoyment is positively related to intention.
---
Attitude Toward Use and Attitude Toward Advertising
In general, attitude toward a certain behavior, such as using a system or service, is positively related to the intention to engage in that behavior [2]. In computer-mediated environments, attitude toward using a system has been found in many studies to be the strongest determinant of the intention to use that system [28], [67]. With respect to social communication behavior online, Chang and Wang [26] show that a more positive attitude toward the use of online communication tools corresponds to a greater behavioral intention to use them. Their results show that behavioral intention is influenced by perceived usefulness, flow experience and attitude toward use. These factors jointly explain 80 percent of the total variance in behavioral intention, of which attitude alone explains 56 percent. In the same vein, Nysveen et al. [67] propose that attitude toward using mobile services is a strong determinant of intention and usage. In addition, Moon and Kim [63] argue that attitude toward using the Web has a strong influence on behavioral intention. On this basis we propose that:
H3: Attitude toward use is positively related to intention to use.
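The variance shares reported above (80 percent for the full model, 56 percent for attitude alone) are R-squared statistics from regression-style models. The sketch below shows how such a figure is computed with ordinary least squares; the data and coefficients are entirely synthetic illustrations, not estimates from Chang and Wang [26] or any other cited study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 319  # sample size chosen to match the study; the data itself is synthetic

# Hypothetical standardized predictors: attitude toward use, flow, usefulness
X = rng.normal(size=(n, 3))
# Synthetic "behavioral intention", driven mostly by the first predictor
y = 0.75 * X[:, 0] + 0.30 * X[:, 1] + 0.15 * X[:, 2] + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2: the share of total variance in y explained by the predictors
resid = y - A @ beta
r2 = 1.0 - resid.var() / y.var()
print(round(r2, 2))
```

With real survey data the same calculation (or its structural-equation analogue) yields the kind of "percent of variance explained" figures discussed in the text.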
Attitude toward advertising can be defined as "a learned predisposition to respond in a consistently favorable or unfavorable manner to advertising in general" [57] p. 53. Research on attitude toward advertising has concentrated mainly on three areas: attitude towards ads [57], [59], perceptions of ads in general [33] and brand attitude [58], [64].
Scholars have shown increasing interest in attitudes toward online advertising since its emergence on the Internet. Studies have investigated, for instance, the perceived value of Web advertising [32], different online advertising formats [24] and attitudes toward online advertising [73]. Attitudes toward online advertising have been found to be related to the informativeness and enjoyment of the advertisements [32], [73]. Attitude toward advertising is a strong determinant of, for instance, purchase intentions [57], [59]. Attitudes toward advertising have also been found to determine behavioral responses in online [24] and mobile environments [48], [82]. The empirical evidence from prior studies about advertising in virtual worlds is virtually non-existent. However, some studies have been conducted in social networking sites. For instance, Kelly et al. [49] examined attitudes toward advertising in an online social networking environment. In their study, many participants indicated that advertising on their online social networking sites was acceptable, because it kept the use of the site free of charge. This may also apply to advertising in virtual worlds. Thus, we suggest the following hypotheses:
H4: Attitude toward use is positively related to attitude toward advertising.
H5: Attitude toward advertising is positively related to intention.
---
Social Identity
Social identity theory is a social-psychological perspective developed by Tajfel and Turner [79], [80]. It defines how people classify themselves and others into various social categories. The social classification comprises two functions. The first function gives the means for a person to define others by cognitively segmenting and ordering the social environment surrounding them. Second, social classification helps individuals to define themselves in the social environment [6].
Originally, the model of goal-directed behavior [69] included only one social variable, namely subjective norm. However the construct of social identity was added to the model by Bagozzi and Dholakia [10]. The purpose in adding the variable was to make the model suitable for examining group actions. Dholakia et al. [30] state that social identity captures the main aspects of the individual's identification with the group in the sense that the person comes to view himself or herself as a member of the community and feels that he/she belongs to it.
Bagozzi [8] states that social identity evolves through self-categorization processes that define how members think and feel about themselves, how other in-group and out-group members are perceived and how one acts in relation to in-group and out-group members. Bagozzi divides social identity into three components: self-categorization, affective commitment and group-based self-esteem. These were later re-defined as cognitive, affective and evaluative social identity [11], [12]. Cognitive social identity refers to self-awareness of membership in a social group and corresponds to self-categorization; affective social identity presents the emotional feeling of belonging within the group; and evaluative social identity refers to a person's positive and negative value connotations related to group membership, that is, collective self-esteem. Research has tested the validity of these measures [14], [21].
Dholakia et al. [30] completed a study of social identity in the context of network-and small group-based virtual communities. Their model tested the motivational antecedents and mediators of group norms and social identity forms (cognitive, affective and evaluative). They hypothesized that higher levels of value perceptions lead to a stronger social identity regarding the virtual community. The results of their study supported the hypothesis and revealed that purposive and entertainment value determined social identity in the relevant context.
Against this backdrop, we propose that social identity is comprised of cognitive, affective and evaluative social identity [11], [12] and hypothesize that:
H6: Social identity is positively related to intention.
H7: Social identity is positively related to behavior.
---
Subjective Norms, Intention and Behavior
The second determinant of intention in the theory of planned behavior is subjective norm, which refers to the influence of one's personal community on the specified behavior [2]. Bagozzi and Dholakia [11] note that group norms might be an important aspect of social influence in small group brand communities, and therefore call for research on the effect of subjective norms on intention. In a virtual community context, the member's subjective norms affecting the intention to perform a certain behavior might be the approval or disapproval of the other members. According to Ajzen [3], normative beliefs are the antecedents of subjective norms. If a person assumes his or her referents think he or she should perform a certain behavior, the person will perceive social pressure to do so. On the other hand, if a person supposes his or her referents would disapprove of the behavior, the person will have a subjective norm applying pressure not to perform the behavior in question. Therefore subjective norm is a social factor that affects a person's intention to behave in a certain manner.
A number of studies indicate that the influence of peers on behavioral intention related to entertainment services is stronger than the influence of other subjective norms, such as parents or comparative referents [35], [61]. Peer influence has been a significant predictor of intention and behavior in the mobile entertainment services setting [17], [48], [72]. In addition, subjective norms have been found to predict user behavior in online games [42], blogs [41] and virtual communities [27]. Recent literature has also found subjective norms to be a significant factor in the user adoption of virtual worlds [18], [45]. As a result, we put forward the following hypotheses:
H8: Subjective norms, especially peers, are positively related to intention.
---
Methodology
The data was collected from the users of a virtual world called Moipal. The survey was promoted via a banner advertisement in the gaming world. The players were encouraged to click on the banner and complete the questionnaire. As an incentive for answering the survey, the respondents were entered into a lottery for a gaming console. Notes on the questionnaire form advised respondents that the purpose of the study was to examine behavior and attitudes in the context of virtual communities. The respondents were asked to devote about ten minutes to completing the survey form. As regards research ethics, the fact that a majority of the Moipal users are underage was taken into account when designing the survey. First, the survey was completely anonymous. To further ensure anonymity, the Moipal user name, i.e. the Pal's name, of the respondents was not requested at any point in the survey. Second, with the exception of the background questions on gender and age, no questions about the respondents' offline lives were included in the survey.
A total of 319 acceptable responses were received. In evaluating the response rate in this kind of online survey setting, we compared the number of those who clicked the link to the number of completed questionnaires. By this count, the response rate was close to 90 percent. A total of 86 percent of the respondents were females. The mean age of the respondents was 14.3 years. These demographics are in line with the demographics of the registered gamers. Potential nonresponse bias was also examined by comparing early to late respondents [5]. In terms of demographics, the groups do not differ from each other (p<.01), but in terms of the study constructs, the early and late respondents differ in their intention and behavior (p<.01). The results of the mean tests indicate that early respondents have higher intentions to use and are more active users of virtual worlds than late respondents. This finding was expected, as those who answer surveys first usually represent the most enthusiastic user groups. On this basis, we argue that the survey reached the majority of the active users of the virtual world, and that nonresponse occurred mostly among less active gamers. Therefore, nonresponse bias should not be considered a major weakness of the study.
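The early-versus-late comparison above rests on two-sample mean tests. A minimal sketch of one common choice, Welch's t statistic (the paper does not specify which mean test was used, and the 7-point intention scores below are made up for illustration):

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical 7-point intention scores for early vs. late respondents
early = np.array([6, 7, 6, 5, 7, 6, 6, 7, 5, 6], float)
late = np.array([4, 5, 4, 3, 5, 4, 4, 5, 3, 4], float)

t, df = welch_t(early, late)
print(round(t, 2), round(df, 1))
```

A large t at the resulting degrees of freedom corresponds to the significant early/late difference in intention reported in the text.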
Potential common method variance bias was reduced and examined in various ways, as suggested by Podsakoff, MacKenzie, Lee and Podsakoff [70]. First, at the data collection stage, the respondents' identities were kept confidential, item ambiguity was reduced and the items were mixed in the questionnaire. Second, at the data analysis stage, we examined common method variance bias through Harman's (1967) one-factor test and the partial-correlation technique. The one-factor solution (χ2 = 4451.6 (df=464), p < .00; RMSEA = .146) was inferior to the hypothesized factor structure. In addition, the partial-correlation technique was used to further assess method bias.
As a marker variable we used the item 'There should be no advertising in the virtual world'. Adding the marker variable to the model had no effect on the observed relationships. On the basis of these two tests, common method variance bias does not appear to be a problem in this study.
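The study ran the CFA-based version of Harman's test; the simpler exploratory variant checks whether a single unrotated component accounts for the majority of variance in the items. A minimal sketch on synthetic data (the factor structure and loadings below are hypothetical, not the study's):

```python
import numpy as np

# Synthetic item responses driven by two distinct latent factors (hypothetical
# data; the study itself used a CFA-based version of Harman's test).
rng = np.random.default_rng(0)
f1 = rng.normal(size=(300, 1))
f2 = rng.normal(size=(300, 1))
items = np.hstack([f1 + 0.5 * rng.normal(size=(300, 3)),
                   f2 + 0.5 * rng.normal(size=(300, 3))])

# Harman's single-factor test (exploratory variant): if the first unrotated
# component explains the majority of total variance, method bias is a concern.
z = (items - items.mean(axis=0)) / items.std(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(z.T))[::-1]   # descending eigenvalues
first_component_share = eigvals[0] / eigvals.sum()  # well below .50 here
```

Because the items here load on two distinct factors, the first component explains well under half the variance, which is the pattern consistent with low method bias.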
---
Measurement Scales
All the items were measured on seven-point scales with a 'do not know' option. In some questions, a semantic differential scale was used instead of a Likert-type scale. In measuring attitudes, items were adapted from Bagozzi and Dholakia [10], [11]. The cognitive, affective and evaluative social identity constructs were each measured with two items adapted from Bagozzi and Dholakia [11] and Dholakia et al. [30]. In measuring ease of use, we adapted a three-item scale from Davis [28] and Davis et al. [29]. Enjoyment was measured with a three-item scale taken from Nysveen et al. [67]. In measuring attitudes toward advertising in the virtual world, we used a semantic differential scale adapted from Ajzen [4]. Subjective norms were measured on a three-item scale taken from Ajzen [4] and Bagozzi and Dholakia [11]. Intentions and behavior were both measured with items adapted from Bagozzi and Dholakia [10] and Dholakia et al. [30].
---
Convergent and Discriminant Validity
The measurement model showed acceptable fit (χ2 = 707.6 (df = 332), p < .00; RMSEA = .060; SRMR = .043; CFI = .987; IFI = .987; RFI = .971). The fit indices (Table 1) associated with the CFA exceeded acceptable thresholds [23], [44]. Only the chi-square value was problematic, but researchers have suggested looking at other fit indices, such as the RMSEA value, when the chi-square test is not passed [31], [83]. The RMSEA statistic for the measurement model was below the cut-off criterion of .08, indicating a relatively close fit of the model [23]. The Cronbach's alphas were all .72 or larger. Following Dholakia et al. [30], composite reliabilities (CR) were calculated for the two-item scales. All CRs were larger than the recommended cut-off criterion of .60 [15]; the scales therefore show sufficient internal consistency. The indicators in the model loaded highly and significantly on their hypothesized constructs. In addition, all the average variance extracted (AVE) values were over .50 (ranging from .61 to .76). On this basis, the confirmatory factor analysis shows acceptable convergent validity. Discriminant validity was assessed by comparing the correlations among the constructs (Table 2) with the square roots of the AVE values. All the AVE square root values were higher than the correlations among the constructs, indicating acceptable discriminant validity [36].
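The CR and AVE statistics reported above follow standard formulas computed from standardized factor loadings. A minimal sketch (the loadings below are hypothetical examples, not the study's actual estimates):

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum λ)² / ((sum λ)² + sum(1 - λ²)) for standardized loadings λ."""
    lam = np.asarray(loadings, float)
    num = lam.sum() ** 2
    return num / (num + np.sum(1.0 - lam**2))

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, float)
    return np.mean(lam**2)

# hypothetical standardized loadings for a three-item scale
lam = [0.80, 0.85, 0.90]
cr = composite_reliability(lam)        # ~ .89, above the .60 cut-off
ave = average_variance_extracted(lam)  # ~ .72, above the .50 cut-off

# Fornell-Larcker criterion: the square root of AVE should exceed the
# construct's correlations with every other construct
sqrt_ave = np.sqrt(ave)
```

The discriminant validity check described in the text is exactly this Fornell-Larcker comparison: `sqrt_ave` for each construct is compared against the off-diagonal correlations in Table 2.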
---
Structural Model Assessment and Hypotheses Tests
The structural model fit was acceptable (χ2 = 835.1 (df = 360), p < .00; CFI = .984; NFI = .972; NNFI = .982; IFI = .984; SRMR = .07; RMSEA = .064) [44], [23]. The hypothesized path loadings, their respective t-values and R2 values are shown in Figure 2 (R2 = .32 for ease of use; R2 = .73 for attitude). Of the nine hypothesized relationships, six turned out to be statistically significant. H1 contended that there is a positive and direct relationship between ease of use and attitude. No support for the relationship was found. There are two possible explanations for this. First, this insignificant path might be explained by the strong relationship between enjoyment and attitude: studies have found that in experiential settings, enjoyment plays a stronger role than ease of use in determining attitudes and behavioral intentions [67]. Second, in technology acceptance research the effect of ease of use on attitude and intention is often weaker than the effect of usefulness, as the effect of ease of use is mediated through usefulness [50]. (This paper is available online at www.jtaer.com, DOI: 10.4067/S0718-18762013000100002.)
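The RMSEA values reported for both models can be reproduced from the χ², the degrees of freedom and the sample size (N = 319), assuming the standard point-estimate formula:

```python
import math

def rmsea(chi2, df, n):
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# both computations reproduce the values reported in the text (N = 319)
measurement = rmsea(707.6, 332, 319)  # measurement model: ~ .060
structural = rmsea(835.1, 360, 319)   # structural model: ~ .064
```

That both reported RMSEA values check out against the reported χ² and df is a small but useful consistency test of the fit statistics.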
In line with the literature [1], [85], [86], [94], we find strong support for H2a, which proposed that enjoyment is positively related to ease of use. To test the reversed path (PEOU→PE), a competing structural model was estimated. The competing model showed a significantly worse fit than the hypothesized model. On this basis, it seems that in experiential settings perceived enjoyment has a significant impact on perceived ease of use, and not vice versa.
With respect to H2b, the path shows that enjoyment is positively related to attitude (β = .87, t = 13.8). This path is extremely strong and indicates that enjoyment is a stronger determinant of attitude than is perceived ease of use. This finding is supported by the literature which has found that enjoyment plays an important role in user acceptance of technology, especially in the case of hedonic systems [78].
There is no evidence to support H2c, which proposed that enjoyment is positively related to intention. This finding echoes Venkatesh et al. [90], who found no support for a direct relationship between perceived enjoyment and behavioral intention. However, that study supported the view that the effects of enjoyment are fully mediated by perceived usefulness and perceived ease of use. H4, arguing that attitude toward use is positively related to attitude toward advertising, was supported (β = .68, t = 12.6). No support was found for H3, which argued that attitude toward use is positively related to intention to use. In contrast with the findings of prior studies on virtual world usage [45], [76], attitude had a non-significant effect in predicting the intention to participate in the virtual world environment. However, Mäntymäki and Salo [66] reported similar findings in their study of a social virtual world called Habbo Hotel. In line with Mäntymäki and Salo [66], we suggest that a potential reason for the non-significant effect may be that, since attitudes develop over time, their role is less salient among young people. Alternatively, it is also possible that intentions to use virtual worlds are driven by affective, emotional, impulsive or habitual factors rather than by attitudes.
H5 contended that there is a positive and direct relationship between attitude toward advertising and intention to use. No support for the relationship was found. One potential explanation may be advertising avoidance: young people may pay little or no attention to advertising in virtual worlds, just as they do in online social networking sites [49]. In such a setting, attitudes toward advertising may be less established and thus may not exert a strong influence on behavioral intention.
The next hypotheses proposed that social identity is positively related to intention (H6) and behavior (H7). Both hypotheses receive significant support from the data and are thus confirmed. We found no support for H8, which argues that subjective norms are positively related to intention. Finally, there is strong evidence supporting H9, which contended that intention is positively related to behavior; the path is strong and significant (β = .38, t = 4.6). The non-significant direct effect of subjective norms on the intention to participate in virtual worlds was counterintuitive and contrary to recent literature, which indicates that subjective norms are a significant factor in the user adoption of virtual worlds [18], [45].
However, the effect of subjective norms on intention has been found to be somewhat inconsistent [40], [87], [88]. For instance, Liang and Yeh [56] found that subjective norms had no significant effect on the continuance intention to use mobile games. In addition, in their examination of e-commerce adoption, Pavlou and Fygenson [68] did not find that subjective norms predicted either the intention to seek information online or the online purchase intention. Recently, using data gathered from 3265 survey participants in a social virtual world called Habbo Hotel, Mäntymäki [65] found no effect of subjective norms on continued use intention. Interestingly, the research setting and profile of respondents in that study were very similar to the current one: respondents were predominantly female and the majority were between the ages of 10 and 15. In line with Mäntymäki [65], we suggest that a potential reason for the non-significant effect of subjective norms may be that normative influence is not particularly salient in predicting virtual world use. Empirical studies have rather consistently found the influence of subjective norms to be less significant in the continuous phase of technology diffusion, or where the use of the technology is voluntary [53], [89]. Alternatively, participants in virtual worlds can interact with other people who just happen to be present in the virtual environment, without knowing them in real life and without necessarily forming personal relationships. As a result, the anonymity of the virtual world may reduce the salience of normative influence.
---
Competing Models
Two competing models were tested. Competing model #1 measured social identity as first-order constructs. Competing model #2 was run without the social identity constructs.
---
Competing Model #2
The second competing model was run without the social identity constructs (Figure 3). The model fit was acceptable (χ2 = 553.3 (df = 220), p < .00; CFI = .983; NFI = .972; NNFI = .980; IFI = .983; SRMR = .07; RMSEA = .069). This model confirms the links between enjoyment and intention to use, attitude and intention, and subjective norms and intention that were not established in the hypothesized model but were proposed in the literature [50], [63], [67]. Hence, our three models show that adding the social identity construct to technology acceptance models affects the other established causal relationships, for example those between attitude and intention and between subjective norms and intention.
---
Discussion
Consumers are increasingly using virtual online games to spend time and interact with other users. The objective of the study was to examine this issue from the viewpoint of users' intentions to use experiential virtual game services. The developed framework showed that social identity is the strongest determinant of intention and behavior in the study setting; it outweighs the effects of attitudes, enjoyment and subjective norms in explaining the intention to use a gaming service. Furthermore, the empirical test of the model successfully validated the multidimensional view of social identity. Our findings further indicate that affective social identity is the strongest indicator of a person's social identity, outperforming the effects of cognitive and evaluative social identity. Affective social identity also has the strongest association with the intention to use a game service and with behavior.
---
Theoretical Contributions
In line with the theory [8], [11], the most notable finding of this study was that social identity is a strong antecedent of intention and behavior in the social virtual world context. Our findings also demonstrate that social identity outweighs the effects that enjoyment, attitude toward use and subjective norms have on intention. We showed that social identity consists of three components, and that these components are important in determining a person's intention and behavior in a gaming world. In line with the theory [11], the most influential component was found to be affective social identity, followed by evaluative and cognitive social identity. Previous studies have reported similar results. Bagozzi and Dholakia [11] found, in their study of both Harley-Davidson brand communities and non-Harley-driving club members, that affective social identity was the strongest part of social identity, while the evaluative component was somewhat less strong and the cognitive component the least strong. They also noticed that customer communities organized around small groups produced greater social identification than similar communities of customers organized around a more general topic. In line with Bagozzi and Dholakia [11], then, it can be concluded that customers in small-group brand communities are more homogeneous in their psychographic characteristics and therefore show greater social identification. Thus, the strength of social identity in this study may be explained by the psychographic similarity of the examined virtual world participants. In summary, the finding that social identity is a strong antecedent of intention appears more robust when interaction in the group is dense and/or organized around a specific theme or setting [11]-[13].
The strength of affective social identity indicates that a person's intention to use a virtual world may be predicted from his or her feelings of belonging to the group. Thus, if a person feels that he or she belongs to a group in the virtual world, he or she is more likely to visit that world. Affective social identity also showed a direct relation to behavior, suggesting that a person who is attached to the group to which he or she belongs is more likely to perform direct behaviors. Evaluative social identity is another important antecedent of behavior: the more important and valuable a member of the group a person perceives him- or herself to be, the more likely he or she is to perform behaviors in the group.
In contexts in which social identity is not present, behavioral intentions can be predicted from attitudes, subjective norms and enjoyment. In other words, these constructs become significant predictors of intention and behavior when social identity is not included in the models or its role is minimal. This kind of situation may occur when a person interacts with people he or she does not know very well, for example when joining a new discussion or interest group within the virtual world. Because the group members are just starting to get to know each other, social identity, and especially the emotional attachment to the group, has not yet strengthened. Instead, the members' attitudes toward using the service and their perceptions of its enjoyment may be better predictors of whether they take part in discussions in the future. Subjective norms may also influence a person's intentions: if an individual member supposes that the other members think that he or she should, for example, take part in later group discussions, he or she will perceive social pressure to do so.
Another important finding is the role of enjoyment as an antecedent of attitudes. In line with the literature [1], [29], [46], [47], [63], [84], [90], [94], the links between enjoyment and ease of use, and between enjoyment and attitude, were strong, suggesting that attitude is influenced by perceived enjoyment. Thus, a person who finds participating in the discussion group enjoyable, for example, is more likely to have a positive attitude toward the service.
The link between intention and behavior was strong in all model tests. This link has been studied extensively in the prior literature [3], [11]- [13]. This study confirms that intention is also an important antecedent of behavior in the social virtual world context.
---
Managerial Contributions
Our study shows that participation in virtual worlds can be predicted from intention, which can in turn be predicted from social identity. The importance and dominance of social identity was evident, and this construct outweighs all other constructs tested. Moreover, comparing this finding to the prior literature shows that the role of social identity as an antecedent of intentions appears to be greater when interaction in the group is dense and organized around a specific theme or setting. From a managerial viewpoint, this implies that developers of virtual worlds should consider building theme-based virtual worlds designed to promote a particular type of content among a community, or provide more opportunities for theme-based group formation among the participants of virtual worlds. We have identified the following important characteristics for developing virtual worlds and a person's social identity within them. First, developers of virtual worlds should promote the development of social identity among users, that is, the part of one's self-concept deriving from the knowledge, attached value and emotional significance of membership of a social group [79]. In other words, developers should enable and encourage users to get to know each other, make friends, and form communities and teams to work together on solving a problem or completing a certain task. To support the feeling of belonging within groups, which refers to the affective side of social identity, the developers and administrators of virtual worlds should allow groups to interact without restrictions, for example by allowing users to interact vividly both verbally (text-based) and nonverbally (gestures and expressions). In addition, a highly personalized graphical user interface and the possibility to design group logos, for example, would support social identity formation in the virtual world context.
The results can also be viewed in the light of marketing communications. The hypothesized link between attitude toward advertising and intentions was not significant, which indicates that a person's intention is not affected by his or her attitude toward advertising in the virtual world. From the advertising point of view, this finding suggests that regardless of how disruptive, sensitive, harmful or beneficial advertising in the virtual world is, it has no direct relationship with the intention to use the service. For advertisers, this finding could have both positive and negative implications. As users do not change their intentions on the basis of advertising on the service, marketers may launch ads that are perceived as disruptive, such as pop-ups or floating ads. However, effective marketing in virtual worlds might call for more sophisticated forms of advertising. As social identity was the central determinant of intention and behavior, marketing should support the development of the users' social identity by reinforcing the users' perceptions of their belonging to and importance in a group. This type of advertising may involve games that require players to form social groups. Another important finding for marketing communications is that perceived enjoyment affects both the perception of ease of use and the attitude toward using the service.
---
Study Limitations and Future Research
The empirical assessment of our framework should be interpreted in light of several limitations arising from our sample, common method bias and the direction of causality. First, our study used a convenience sampling method, which yielded a sample heavily dominated by females (86 percent) and the young; the results therefore cannot be generalized to other populations. One might expect preteens' responses to surveys to be superficial, but we found no biases in the answers attributable to respondents' age. To be more certain about possible answer bias, a comparison sample should be collected. Second, although common method bias was minimized, its impact on the survey results could only be completely ruled out with longitudinal data. Third, the direction of our causal relationships is based on theory rather than established statistically. However, we were also able to contribute to the discussion around the direction of causality between the enjoyment and ease of use constructs.
The limitations and findings presented offer important opportunities for further research. We propose that researchers further validate the links between social identity and the other constructs considered in the study. Specifically, prior studies have not conceptualized, and therefore not tested, the association between social identity and enjoyment, or between social identity and attitude. As theory has not examined these aspects before, further work is needed to capture the links between social identity and the other constructs; research has mostly concentrated on modeling the links between social identity and intention [11], [13]. In addition, we propose more research on the concept of social identity in experiential service settings, as previous studies have mostly modeled social identity in the context of goal-directed behavior rather than in experiential services involving pleasure-seeking and hedonic user experiences [11], [13].
Finally, although this research has incorporated a variety of constructs into the developed framework, it seems that other factors may also exert an influence. As such, the exploration of differentiated service dynamics in alternative contexts seems a potentially fruitful avenue for research. It would also be interesting to develop a better understanding of how to grow the user base of virtual worlds: how can new users be attracted, and what are the key issues at this stage? This calls for more research on the phase prior to exposure to the virtual world and prior to the social influence exerted by its other users.
---
Overtourism is an increasingly relevant problem for tourist destinations, and some cities are starting to take extreme measures to counter it. In this paper, we introduce a simple mathematical model that analyzes the dynamics of the populations of residents and tourists when there is competition for access to local services and resources, since the needs of the two populations are partly mutually incompatible. We study under what conditions a stable equilibrium in which residents and tourists coexist is reached, and under what conditions tourists take over the city and expel its residents, among other scenarios. Even small changes in key parameters may bring about very different outcomes. Policymakers should be aware that a sound knowledge of the structural properties of the dynamics is important when taking measures, whose effects could otherwise be different than expected and even counterproductive.

---
Introduction
The rapid increase in global mobility that has characterized the mature phase of the globalization process over the past couple of decades has also, as a consequence, led to the escalation of 'overtourism' issues in many global tourism destinations, most notably in major art and heritage cities. Although massive flows of tourists clearly benefit the local economy, they also pose a major threat to the livability and, in some cases, even the sustainability of cities that are literally consumed by a level of human occupancy they were not designed or intended to host. In Barcelona, where the number of overnight stays escalated from 1.7 million in 1990 to more than 8 million in 16 years, overtourism is one of the key causes of an environmental pollution emergency (Ledsom 2019). In addition to the most renowned tourist locations, the geography of overtourism is also rapidly expanding due to the global visibility acquired by some cities for having been the shooting locations of successful TV series, as in the case of Dubrovnik for Game of Thrones (Wiley 2019). However, an increasing number of critical voices are questioning this trend, locally as well as internationally (Economist 2018). For residents, overtourism may have dramatic consequences. Housing for permanent residential use becomes increasingly scarce and expensive. Services catering to the needs of locals become rarer, more difficult to reach, and again more expensive. The constant noise and the overcrowding of streets and local transport can be a source of considerable stress for working people, families with small children and the elderly. In cities like Venice, the number of bed-and-breakfasts and flats for short-term tourist occupancy has nearly doubled in the space of just one year (Tantucci 2018). As a consequence, residents are evicted by landlords who find it far more profitable to rent to tourists.
In Florence, for instance, between October 1, 2017, and June 30, 2018, as many as 478 residents who could not keep up with the rising rents had to leave their homes, including lifetime ones: 209 living in the historical center, 71 in the Unesco area and 198 in other areas of the city (Conte 2018). More generally, so-called airification (Picascia et al. 2019) has been identified as a disruptive force that is literally 'hollowing out' cities (Hinsliff 2018). This state of things does not come as a complete surprise to the tourism studies literature. Although early warnings were duly issued, as in the seminal paper by van den Borg et al. (1996), they did not succeed in convincing local policy makers to devise appropriate countervailing strategies and to take action. Now that the negative effects of the phenomenon are becoming indisputable, however, some cities are starting to react aggressively. Amsterdam has banned the concession of new licenses to businesses within the historical city core that offer goods and services targeting tourist demand (O'Sullivan 2017), as a way to curb the 'Disneyfication' of the city (Boztas 2017). Bruges has strictly limited the maximum number of cruise ships that may be hosted at its port's docks on a daily basis and has limited its own tourism-related advertising in major nearby cities (Marcus 2019). Venice has implemented a very severe set of restrictions on many different kinds of tourist misbehavior, sanctioned with heavy fines (Spinks 2018). Ten major European heritage cities, namely Amsterdam, Barcelona, Berlin, Bordeaux, Brussels, Krakow, Munich, Paris, Valencia and Vienna, have jointly signed a letter to the new EU Commission asking for severe limitations on the further expansion of Airbnb and other holiday rental websites (Henley 2019).
However, it is not easy to go against such a powerful trend, even though the current COVID-19 crisis, which has caused a temporary collapse of the tourism industry worldwide, will probably give overcrowded tourist cities an unexpected opportunity to prevent a return to the 'old normal' once the pandemic is over (Higgins-Desbiolles 2020). The vested interests that rely upon the extractive logic of the mass tourism economy constitute a major local consensus pool and exert powerful political pressure (Benner 2019). On the other hand, the needs of tourists and residents differ significantly, and this is likely to spark conflict between different local stakeholders, depending on the extent to which they benefit from tourism (Concu and Atzeni 2012). Whether a city eventually gets colonized by the tourism economy or manages to find a reasonable compromise can therefore be the result of a very complex interplay of factors. It is thus of particular importance to study under what conditions such interplay leads to different long-term scenarios, enabling public decision makers to better understand not only the nature of the problem, in order to imagine and test possible solutions, but also the critical conditions that regulate the emergence of possible outcomes. Merely proposing 'plausible' or 'just' solutions is not enough: we also need to assess whether such solutions would work, and under what circumstances, once they are actually implemented. In principle, the solutions that are most desirable in abstract terms need not be the ones that work best. As cities are very complex dynamical systems, the pursuit of the public interest, which in this case identifies to a significant extent with that of city residents, whose 'right to the city' (Lefebvre 2010) should be the object of special consideration and protection, needs to be supported by evidence-based policies building upon a sound understanding of the underlying economic and social dynamics.
The aim of this paper is to study a simple dynamic model that analyzes the effects of the tension between residents and tourists in the social usage of city resources. We focus on the interplay of the essential factors behind such tension: the substitution between resident-oriented and tourist-oriented facilities and shops, the congestion of city space caused by overtourism, but also the experience value of cities as related to the effective presence of residents as a source of authenticity. Given that the escalating tourist flows are literally preying on the city's resources from the residents' viewpoint, it is natural to model such dynamics with the predator-prey framework in mind. We introduce an expanded variant of the predator-prey dynamics, which yields more complex dynamic behavior than the original one and allows a better analytical treatment of the main factors at play. The model's structure is easily interpretable, but the corresponding dynamics are not obvious. In particular, we show that the actual dynamic trajectories of the system may be very different for relatively small changes in the key parameters. This implies that even relatively small differences in local conditions and in policy actions may cause divergent outcomes, with substantial differences in terms of their social desirability. Our results should be read as a cautionary tale against delayed or unsystematic action in curbing the social costs of overtourism: intervening too little or too late, or not focusing on the truly critical parameters, might lead to disappointing results.
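As a point of reference for the predator-prey framework invoked above, a minimal numerical sketch of the classic Lotka-Volterra system can be written as follows. This is not the paper's expanded variant, and all parameter values are purely illustrative:

```python
import numpy as np

def lotka_volterra(r0, t0, a=1.0, b=0.1, c=1.0, d=0.05, dt=1e-3, steps=20_000):
    """Euler integration of the classic predator-prey system:
         dR/dt = R (a - b T)    residents ('prey')
         dT/dt = T (-c + d R)   tourists ('predator')
    All parameter values are illustrative, not estimated from data."""
    R, T = [r0], [t0]
    for _ in range(steps):
        r, t = R[-1], T[-1]
        R.append(r + dt * r * (a - b * t))
        T.append(t + dt * t * (-c + d * r))
    return np.array(R), np.array(T)

# start away from the coexistence equilibrium (R* = c/d = 20, T* = a/b = 10)
R, T = lotka_volterra(r0=10.0, t0=5.0)
# the trajectory cycles around the equilibrium: both populations stay positive,
# and the tourist population overshoots the equilibrium level on each cycle
```

In the classic system, trajectories are closed orbits around the coexistence equilibrium; the paper's point is precisely that once substitution, congestion and authenticity effects are added, the qualitative outcome (coexistence, displacement of residents, or collapse of the destination) becomes highly sensitive to the parameter values.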
The remainder of the paper is organized as follows. Section 2 offers a brief review of the main issues discussed in the overtourism-related literature. Section 3 presents the model. Section 4 contains the main results. Section 5 discusses the results and concludes. A technical Appendix closes the paper.
---
Literature review
One vastly debated issue that clearly relates to overtourism is that of residents' attitudes toward tourists. There is a rich literature exploring this topic, but most of it has dealt with minor or even marginal tourist destinations rather than with overcrowded tourist attractors. Lin et al. (2017) focus on the process of value co-creation through social interaction between tourists and residents in a Chinese sample and find that the positive economic benefits from tourism may also positively affect the life satisfaction of residents. Mathew and Sreejesh (2017), working on a sample of three Indian tourism destinations, highlight the relationship between responsible tourism and perceptions of sustainability of the tourist destination in promoting the perceived quality of life of residents. On the other hand, Boley et al. (2017) show that although destinations that place more emphasis on sustainability also tend to be the more sustainable ones, residents' perceptions of actual sustainability tend to be low. Rasoolimanesh et al. (2017) show that, for two UNESCO Heritage sites in Malaysia, one located in an urban context and the other in a rural one, there are nuanced differences between the urban and the rural site in terms of the impact of residents' perceptions on the support for tourism development or lack thereof, but also substantial homogeneities. Therefore, when tourism is still in a developing phase, the evidence of the benefits from tourism development can be a main driver of support from residents, and this effect may even cut across major territorial divides such as the urban/rural one. As shown by Stylidis et al. (2014) and Wang and Chen (2015), a central mediating role in residents' perceptions of the impacts of tourism is played by perceived place image, a dimension that is, tellingly, significantly compromised in destinations affected by overtourism, but that can be improved by an increased tourist presence in developing destinations.
It is no surprise that literature reviews of this research field lament the excessive narrowness of focus of most research, as well as its reliance on specific quantitative techniques that are good at highlighting specific effects but often fail to deliver the big picture (Sharpley 2014). For instance, Almeida Garcia et al. (2015) argue that the current literature on residents' perception of tourism significantly underplays the role of key historical, cultural and social factors in shaping a specific destination and its response to tourism. Moreover, the nature of the 'ecological' interactions between the resident and tourist populations may make a big difference and has an intrinsically dynamic nature (Vargas-Sanchez et al. 2011). And if this is true in general, it is even truer for overcrowded destinations, where the possible scenarios may be very different from one another. For instance, touristic congestion may be the result of a sudden boom or of a gradual, steady increase; the pervasive presence of tourists in the urban space may have become a deeply ingrained feature of the local culture, or be an outcome of recent tourism development strategies; the availability of space and the impact of building density may be not particularly problematic for urban livability, or rather extremely critical and exacerbated by tourism flows, and so on, to limit ourselves to a few obvious examples. Segota et al. (2017) show, for instance, that the informedness and involvement of residents in the local management of tourism-related issues significantly affect their perceptions in the expected direction (the more involved and informed, the more positive).
A rare example of a study on the acceptability of crowding perceptions by residents in a global tourist destination, Bruges, carried out by Neuts and Nijkamp (2012), moreover, shows that the actual negative perception of crowding varies widely across residents depending on individual characteristics and is not found in the majority of the sample. However, the situation might have changed now, in the light of further recent accelerations of tourism flows in many heritage cities, as possibly signaled by Bruges' current de-advertising on the tourism market. Despite this, overtourism and its policy implications are still relatively poorly covered in the literature, with the consequent risk of failing to fully appreciate the complex social conflict issues that can emerge and erupt in the absence of proper policy strategies and management at the city level.
The key critical aspect, which is amplified by overtourism but already apparent in developing destinations even in the case of positive residents' perceptions, is the impact of tourism on local culture and behaviors, whose effects can only be appreciated in full in the medium-long run. Of course, culture and behaviors are inevitably bound to change anyway, independently of tourism. But the changes induced by tourism might eventually clash with the developmental priorities and goals of local communities (Simpson 2009). There is a need to strike a balance between the benefits of tourism as a local developmental driver and potentially negative effects, e.g., in terms of long-lasting impacts on cultural identity and authenticity (Lacy and Douglass 2002; Cole 2007; Zhu 2012), on socioeconomic inequalities (Lee 2009; Alam and Paramati 2016), on community empowerment (Cole 2006; Aref and Redzuan 2009; Chen et al. 2017), and so on. Especially critical is the evaluation of residents' perceptions in developing countries affected by substantial socioeconomic issues (Truong et al. 2014). Analyses that rely on an exclusively tourism-centric perspective are likely to overlook the most critical dimensions (Easterling 2004). Ribeiro et al.'s (2017) analysis of the development of pro-tourism behaviors of Cape Verde Islands residents is an example in this regard. Nunkoo and Gursoy (2012) instead consider, in the case of Mauritius, the role of local identity in the orientation of residents' support for tourism, but interestingly point out how even the emergence of a supportive orientation need not translate into a significant shift in attitudes, thus underlining the complex functioning of community identity as a regulator of cultural and social change.
On the other hand, tourism itself is constantly raising the bar as to the level and depth of interaction with local social life and customs that tourists expect to reach as a quintessential aspect of their experience, to the extent of becoming co-creators of the experience itself. Prebensen and Xie (2017) show, for example, that the level of tourists' participation under the form of mastering and co-creation in experience tourism significantly enhances their value perception. Paulaskaite et al. (2017) highlight how tourists increasingly expect to spend their time at the destination 'living like the locals,' therefore transforming local identity itself into a commodity that can be purchased at will. Such issues are relevant for all kinds of tourist destinations, but they are especially problematic in overcrowded ones. Overtourism shifts the focus of residents' perceptions onto critical aspects such as the pressure of tourism flows on the local system (Muler Gonzalez et al. 2018), the threats to ecological sustainability (Cheer et al. 2019) and the role of media, and social media in particular, in causing tourist congestion peaks on an almost instantaneous basis (Jang and Park 2020). In other words, the aspect of overtourism that is seen as the most socially alarming is its capacity to put the homeostatic mechanisms of local systems under stress at an unprecedented scale and pace, on many different levels: economic, social, cultural, logistical, and so on. Overtourism magnifies many of the most critical features of tourism to an extent that strains local governance and regulatory capacity; however, its effects may be more critical on certain dimensions rather than others (Carvalho et al. 2020). When such impact is perceived as disruptive by local communities, social protest ensues (Alexis 2017; Pinkster and Boterman 2017; Seraphin et al. 2018).
Once a perceived saturation level is reached, a vicious circle can take over as residents classify as threatening by default any tourism event that causes local congestion, irrespective of its quality, importance and expected long-term benefit for the city (Lemmi et al. 2018). This kind of vicious circle may reinforce, and be reinforced by, others, e.g., the one causing the erosion of local services quality in overcrowded tourism destinations (Caserta and Russo 2002). Such social dynamics are difficult to manage at all levels, and even large digital tourism platforms may find it hard to function well (e.g., in rewarding quality in their rankings of local businesses) when the effects of digital influencing upon spatial patterns of tourism congestion spark social controversy (Ganzaroli et al. 2017). Such new, system-wide challenges may be effectively tackled only through tailored, sophisticated forms of local cooperation between key stakeholders (Kuscer and Mihalic 2019) and of smart governance (Agyeiwaah 2019).
---
The model
The literature briefly discussed in the previous section shows how the problem of overtourism, in the more general context of the residents' perceptions of the social and economic impacts of tourism, is generally approached through the analysis of specific case studies and through the measurement of perceptions and attitudes by means of suitable psychometric tools. In this paper, we take a different route as a contribution to a comprehensive approach to the smart governance of overtourism dynamics: that of characterizing such dynamics in terms of an explicit mathematical model. The ambition of the model is not to provide a detailed, realistic representation of overtourism in all of its multifaceted dimensions, but to examine the basic conditions that may favor, or prevent, its onset, paying special attention to a basic phenomenon: the competition between resident-oriented and tourist-oriented services for the limited spatial and material resources of the city. As we have seen from the literature review, a detailed modeling of such dynamics would involve many different variables (place identity, social perceptions, local culture, historical trajectories and many more) and this would easily make an explicit dynamic analysis intractable due to the number of potential variables implied. However, simplifying the model to its essentials has the advantage of providing some insight that may help focus upon the possible dynamic regimes that may prevail, providing policymakers with some important indications for policy design.
We choose as our conceptual benchmark the classical Lotka-Volterra predator-prey model, which has been the object of countless applications in a variety of different fields, and of mathematical generalizations of all kinds, due to its optimal combination of simplicity of structure and richness of dynamic behaviors. In our case, the predator-prey logic is somewhat ingrained in the nature of the problem we want to analyze, as one thinks of overtourism as the process through which tourism flows literally 'capture' the local system, reshaping it according to their necessities. On the other hand, even a basic description of the overtourism problem urges us to depart from the basic formulation of the predator-prey model to better take account of some essential specificities. In particular, the model we propose has the following structure:
ẋ = r + ax − b(y − ȳ)x + c(x − x̄)
ẏ = s + dxy + e(y − ȳ)x − fy
where x is the level of the resident population and y is the level of the tourist population. All parameters are positive. Let us now see in some detail the rationale behind the equations. The basic premise of the model is that there is an implicit competition between residents and tourists for the availability of services and resources that respond to their specific, and partly mutually incompatible, needs. In particular, there are two threshold values of x and y, x̄ and ȳ, respectively, beyond which the local level of residents (tourists) is large enough to warrant a satisfactory provision of resident- (tourist-) specific services and resources. We call such thresholds the relevance thresholds. When one population crosses its relevance threshold, the local economy becomes increasingly respondent to that population's needs, and this positively influences the dynamics of such population. These two effects are captured, respectively, by the two terms x − x̄ and y − ȳ. So, the level of the resident population depends positively on whether the residents are above their relevance threshold, and negatively on whether the tourists are above their relevance threshold. In this latter case, however, the size of the effect is scaled by the level x of the resident population: the larger the pool of residents, the more an above-threshold level of tourists makes competition for scarce space and resources more sustained, increasing the negative impact of tourists on the resident population. Parameters b and c measure the relative size of the two effects. Moreover, the dynamics of the resident population also linearly depends (according to the parameter a) on the actual level of the resident population, as the choice to live in a city is characterized by some amount of inertia, due to a variety of factors such as relocation costs, habit, cultural and affective reasons, job-related reasons, and so on.
As to the tourist population, it positively benefits from the crossing of its own relevance threshold as already anticipated, and the effect is measured by the parameter e. Moreover, its dynamics are negatively influenced by tourism congestion, an effect whose size is measured by the parameter f. Finally, the tourist population's persistence in the destination also depends, positively, on the level of residents, as measured by the parameter d. If the resident population is small, the city basically turns into a 'theme park' devoid of any specific authenticity and vitality, becoming a mere entertainment district that maximizes tourism-related profit. This effect, as already hinted at in the discussion of the previous section, therefore captures the 'experience economy' dimension, as tourists do not simply ask for entertainment, but also value opportunities of meaningful interaction with locals. Finally, parameters r and s measure the exogenous components of the rates of growth of the resident and tourist populations, respectively.
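As a minimal numerical sketch, the two equations can be coded directly. The parameter values below are illustrative assumptions chosen only to exercise the individual terms; they are not values used in the paper.

```python
# Right-hand side of the resident/tourist system. All parameter values
# used below are illustrative assumptions, not estimates from the paper.
def rhs(x, y, r, a, b, c, s, d, e, f, xbar, ybar):
    """Return (dx/dt, dy/dt) for residents x and tourists y."""
    dx = r + a * x - b * (y - ybar) * x + c * (x - xbar)  # resident dynamics
    dy = s + d * x * y + e * (y - ybar) * x - f * y       # tourist dynamics
    return dx, dy

# Example: tourists above their relevance threshold (y > ybar) drag the
# resident population down, while residents above theirs (x > xbar) push it up.
params = dict(r=1.0, a=0.1, b=1.0, c=1.0, s=1.0,
              d=0.2, e=0.3, f=1.0, xbar=0.5, ybar=1.0)
print(rhs(1.0, 2.0, **params))
```

Feeding this right-hand side to any standard ODE integrator reproduces trajectories of the model.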
The system of equations above can be conveniently rewritten as follows:
ẋ = r − c x̄ + (a + b ȳ + c)x − bxy (1)
ẏ = s − e ȳ x + (e + d)xy − fy (2)
In our analysis, we will refer to (1)-(2) as the default formulation of the model.
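Since (1)-(2) is obtained from the original system by pure algebraic rearrangement, the two formulations must agree at every point. A quick numerical check (with randomly drawn positive values for all quantities, which are of course arbitrary) can confirm the expansion:

```python
import random

# Check that the default formulation (1)-(2) matches the original system
# at randomly drawn points; any disagreement would signal an algebra slip.
def original(x, y, r, a, b, c, s, d, e, f, xbar, ybar):
    return (r + a*x - b*(y - ybar)*x + c*(x - xbar),
            s + d*x*y + e*(y - ybar)*x - f*y)

def default_form(x, y, r, a, b, c, s, d, e, f, xbar, ybar):
    return ((r - c*xbar) + (a + b*ybar + c)*x - b*x*y,   # equation (1)
            s - e*ybar*x + (e + d)*x*y - f*y)            # equation (2)

random.seed(0)
for _ in range(1000):
    args = [random.uniform(0.1, 3.0) for _ in range(12)]
    d1, d2 = original(*args), default_form(*args)
    assert abs(d1[0] - d2[0]) < 1e-9 and abs(d1[1] - d2[1]) < 1e-9
print("the two formulations agree")
```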
---
Existence and stability of the stationary states
To analyze the dynamic behavior of the model, we start by setting:
A := (a + b ȳ + c)/b, B := (c x̄ − r)/(a + b ȳ + c), C := e ȳ/(e + d), D := s/(e ȳ), E := f/(d + e)
The complete taxonomy of possible dynamic regimes is illustrated in the following proposition. We will see how even a relatively simple model like the present one can generate a rich array of dynamic behaviors depending on the prevalence of certain constellations of conditions rather than others.
Proposition 1 Under the assumption that all parameters of the system (1)-(2) are strictly positive, the following dynamic regimes can be observed:
(1) If B > 0 (i.e., x̄ > r/c), then at most two stationary states exist. In particular,
(1.a) if either D < B < E or E < D < B holds, a unique repelling stationary state P exists (Fig. 7b, c in the Appendix);
(1.b) if B < min{D, E}, two stationary states P1 = (x1*, y1*) and P2 = (x2*, y2*), with x1* < x2* and y1* < y2*, may exist, where P1 is always a saddle point, whereas:
(1.b.1) if D < E, then P2 is a repeller (Fig. 7a, e in the Appendix);
(1.b.2) if D > E, then P2 is either a repeller or an attractor (Fig. 7d in the Appendix).
(2) If B < 0 (i.e., x̄ < r/c), a unique stationary state exists; if D < E, it is either a repeller or an attractor (Fig. 7f in the Appendix) while, if D > E, it is an attractor (Fig. 7g in the Appendix).
---
Proof See Appendix
To explain the meaning of Proposition 1, let us start by understanding better the interpretation of the new composite parameters A, B, C, D and E. The parameter A measures the relative size of the parameters that positively regulate the growth of the resident population vs. the parameter b that negatively affects it. In particular, the growth of the size x of the resident population depends positively on the parameter a (measuring the persistence effect), on the parameter c (representing the reactivity to the difference between x and the threshold x̄) and on the threshold ȳ for the tourist population. We can therefore intuitively interpret A as a measure of residents' resilience. As for B, it positively depends on the relevance threshold of the resident population (measured by c x̄): the larger it is with respect to r, and with respect to the parameters that positively influence residents' resilience, the larger B. B can therefore be intuitively interpreted as a measure of residents' susceptibility: the higher B, the more demanding it is for the residents' community to fulfil the conditions for the prevalence of a resident-oriented local economy. Likewise, C can be interpreted as a measure of tourists' susceptibility, as C is larger the higher the relevance threshold for tourists, e ȳ, and the smaller the combined strength of the experience value parameter d from visiting the city plus the impact e of crossing the relevance threshold on the availability of tourist-oriented services and resources. D can be seen as the city's intrinsic attraction value for tourists, as it equals s (the exogenous growth rate of tourists) scaled by the relevance threshold for tourists. Finally, E measures the tourists' relative congestion effect, expressed by the congestion parameter f scaled by the combined strength of the experience value and resource and service availability effects for tourists.
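The composite parameters and the case analysis of Proposition 1 translate directly into a small classifier. The sketch below uses illustrative parameter values (an assumption on our part, not values from the paper), and the helper names `composite` and `regime` are hypothetical:

```python
# Compute the composite parameters A-E and classify the dynamic regime
# according to Proposition 1. Parameter values are illustrative assumptions.
def composite(r, a, b, c, s, d, e, f, xbar, ybar):
    A = (a + b*ybar + c) / b               # residents' resilience
    B = (c*xbar - r) / (a + b*ybar + c)    # residents' susceptibility
    C = e*ybar / (e + d)                   # tourists' susceptibility
    D = s / (e*ybar)                       # intrinsic attraction value
    E = f / (d + e)                        # relative congestion effect
    return A, B, C, D, E

def regime(B, D, E):
    if B > 0:
        if D < B < E or E < D < B:
            return "one stationary state, repelling (case 1.a)"
        if B < min(D, E):
            if D < E:
                return "saddle plus repeller (case 1.b.1)"
            return "saddle plus repeller-or-attractor (case 1.b.2)"
        return "outside the subcases listed in Proposition 1"
    if D < E:
        return "one stationary state, repeller or attractor (case 2)"
    return "one stationary state, attractor (case 2)"

A, B, C, D, E = composite(r=1.0, a=0.1, b=1.0, c=1.0, s=1.0,
                          d=0.2, e=0.3, f=1.0, xbar=0.5, ybar=1.0)
print(regime(B, D, E))   # here B < 0 and D > E
```

With these numbers B < 0 (low residents' susceptibility) and D > E (attraction dominates congestion), so the classifier lands in the attracting branch of case (2).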
At this point, we are ready to illustrate the findings in Proposition 1. The results are organized around the sign of B, that is, around whether or not the residents' susceptibility problem occurs, which implies a relatively high relevance threshold for the residents-oriented local economy to kick off. In the case of a positive level of residents' susceptibility, that is, B > 0, we have at most two stationary states that can be potential equilibria for the dynamics. A first sub-regime relies on two possible conditions at which, of the two possible stationary states, only one exists and is repulsive, that is, the dynamics never settle down to a given state. The two conditions are D < B < E and E < D < B. In the first case, we have a condition where the congestion effect is particularly high with respect to residents' susceptibility and to the intrinsic attraction value for tourists. This is, for example, the case of a relatively small city where, despite the comparatively modest attraction value, congestion is a problem and tourists can crowd out residents relatively easily. In the second case, we have, on the contrary, a situation where congestion is relatively unimportant and residents' susceptibility is comparatively high in the presence of a relatively substantial attraction value. This is, for instance, a scenario that could describe a relatively large city with high carrying capacity and cultural/amenity value, where there is real competition for local resources and services between residents and tourists. These two conditions may therefore span very different cases.
A second sub-regime contemplates again the existence of two possible stationary points, one of which is always a saddle, that is, a state where a unique converging trajectory exists and all the other ones diverge. The key condition for the second sub-regime is that B be smaller than both D and E. Given that B is constrained to be positive, the condition requires that both the congestion effect and the intrinsic attraction value are relatively high. An example here is that of an established tourism destination with severe congestion problems where however the issue of resource accessibility for residents is relatively less binding, possibly due to a large, diversified local economy that can accommodate local demand. An extra condition regulates the dynamic properties of the second possible steady state, according to the relative size of the two potentially dominating effects. If the dominant effect is congestion, the second stationary state is locally unstable. If instead the dominant effect is the intrinsic attraction value, the second stationary state may either be locally unstable or locally stable, that is, may attract all local trajectories and emerge as a stable state.
When residents' susceptibility is not a major concern (i.e., B < 0), the dynamic regime is much simpler. In this case, the stationary state is always unique. Moreover, if congestion prevails upon intrinsic attraction, this state may be either attracting or repelling (locally stable vs. unstable). If the opposite is true and intrinsic attraction prevails, the stationary state is always attractive. An example of this latter condition is a world-renowned tourist destination, with a large carrying capacity that can manage congestion, and where the competition between residents and tourists for local services and resources is not binding.
Proposition 1 tells us, among other things, that the dynamics we are studying is in many cases not conducive to a stable equilibrium state and is rather characterized by more complex long-run behaviors. The stability properties of the stationary states do not give us enough information to understand what such dynamic behaviors will look like, as they only provide insight about what happens close to them. However, the structure of stationary states is an important piece of information, and in particular, it is interesting to ask how the number and stability properties of the stationary states vary depending on the levels of specific couples of parameters, such as the relevance thresholds for residents and tourists, given our focus on overtourism and its possible impacts. In all the analysis that follows, the choice of parameter values for the simulations has been made in order to select cases that enable us to illustrate clearly and in a compact way the dynamic properties of the model.
Figure 1 illustrates the bifurcation diagrams obtained by varying the relevance threshold ȳ. Panels (a) and (b) show how the coordinates x and y (on the horizontal axis) of stationary states vary in response to variations in ȳ (on the vertical axis). The LP point separates the interval of ȳ values where no stationary state exists from that in which two stationary states exist. The point H indicates the Hopf bifurcation value of ȳ. Dashed, continuous and dotted lines represent saddle points, attractive and repulsive stationary states, respectively. The conditions under which a Hopf bifurcation occurs by varying the parameter ȳ, computed according to the criterion proposed by Liu (1994), are given in the Appendix. In Panel (c) of Fig. 1, we show how, through the Hopf bifurcation, a family of limit cycles emerges. Notice that an increase in the parameter value ȳ leads to an increase in the magnitude of the limit cycles.
In Fig. 2, we show, for a specific set of parameter values, the bifurcation diagram that illustrates the existence and stability of the stationary states as the two relevance thresholds vary. Figure 2a provides the full diagram, whereas Fig. 2b presents an enlargement of the rectangular area where the most fine-grained structure is found. As we can see, the bifurcation diagram contains here all seven possible scenarios for the stationary states, where, in Fig. 2, the labels (S, A), (S, R), A, R denote, respectively: regions where two stationary states exist, of which one is a saddle (S) and another an attractor (A); regions where two stationary states exist and are in particular a saddle and a repeller (R); regions where one stationary state exists and is an attractor; and regions where one stationary state exists and is a repeller. The H curve is the Hopf bifurcation curve, whereas the LP curve is the one that separates the region without stationary states from the region where at least one stationary state exists. The Hopf bifurcation curve H separates the regions where an attractive stationary state is found (to the left of the curve) from those where a cycle emerges, as shown in more detail in Fig. 2b. In the simulations below, we find that the attractive cycle is stable and the corresponding stationary state consequently becomes unstable. Figure 2c, instead, reports the bifurcation diagram in the (c, e) space, where we study how the structure of stationary states varies with the parameters that measure the strength of resource provision when residents (respectively, tourists) cross their relevance threshold. Again, the bifurcation curve H and the LP curve delimit the areas where one of the stationary states (or the only one, if unique) changes its local behavior from repulsive to attractive, and where stationary states exist vs. fail to exist.
From these figures we see how, in the case of the bifurcation diagram for the relevance thresholds, there is a vast region where stationary states do not exist for most values of the relevance threshold for residents if the relevance threshold for tourists is small enough. That is, when tourists are substantially favored in their capacity to access local resources with respect to residents, the dynamics fails to settle on a stationary state. However, when the relevance threshold for residents is very low, even for relatively high levels of the relevance threshold for tourists a stable stationary state emerges. That is, when residents succeed in getting access to the local resources, the system has a chance to stabilize itself. But when the relevance threshold for tourists or even both thresholds become very high so that it is difficult for both populations to gain easy accessibility to local resources, there is no chance that the system may settle down to a stable equilibrium.
In the case of the bifurcation diagram in the (c, e) space, the pattern is more complicated, and the existence of stable stationary states here relies on more specific combinations of the two parameters. In general, when c is very high, that is, when the access to resources beyond the relevance threshold has a big positive impact on the population of residents, no equilibrium exists, whereas for smaller values of c a stable stationary state can emerge. Again, when both parameters are large, no stable stationary state can be found. Remember that these bifurcation diagrams are drawn for a given choice of numerical values of all the other parameters, and that they change as any one of the other parameters varies.
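In the absence of closed-form expressions, stationary states and their local stability can be checked numerically. The sketch below locates a stationary state by Newton's method and classifies it through the trace and determinant of the Jacobian; all numbers are illustrative assumptions, not the parameter values behind Figs. 1 and 2 (which are not reported in this section).

```python
# Locate a stationary state by Newton's method and classify it via the
# trace/determinant of the Jacobian. Parameter values are illustrative
# assumptions, not the configuration used for the paper's figures.
P = dict(r=1.0, a=0.1, b=1.0, c=1.0, s=1.0, d=0.2, e=0.3, f=1.0,
         xbar=0.5, ybar=1.0)

def F(x, y, r, a, b, c, s, d, e, f, xbar, ybar):
    return (r + a*x - b*(y - ybar)*x + c*(x - xbar),
            s + d*x*y + e*(y - ybar)*x - f*y)

def jacobian(x, y, r, a, b, c, s, d, e, f, xbar, ybar):
    return ((a + b*ybar + c - b*y, -b*x),
            (-e*ybar + (d + e)*y,  (d + e)*x - f))

x, y = 1.5, 2.4                      # initial guess
for _ in range(50):                  # Newton iteration
    fx, fy = F(x, y, **P)
    (j11, j12), (j21, j22) = jacobian(x, y, **P)
    det = j11*j22 - j12*j21
    x -= ( j22*fx - j12*fy) / det    # solve J * step = F (2x2 Cramer rule)
    y -= (-j21*fx + j11*fy) / det

(j11, j12), (j21, j22) = jacobian(x, y, **P)
tr, det = j11 + j22, j11*j22 - j12*j21
kind = "saddle" if det < 0 else ("attractor" if tr < 0 else "repeller")
print((round(x, 4), round(y, 4)), kind)
```

With these numbers the iteration settles on a coexistence state near (1.56, 2.42) with negative trace and positive determinant, i.e., a local attractor.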
To get a better understanding of what the actual trajectories of the system look like, we report a few examples of phase diagrams for a specific choice of parameter values in Fig. 3. In particular, we keep the values of all the other parameters but the relevance thresholds as in Fig. 2, and we set a specific value x̄ = 4, letting ȳ vary. The four cases correspond, respectively, to points from the white, yellow, indigo and orange regions of Fig. 2b. For ȳ = 0.8 (white region in Fig. 2b), for most initial conditions the system converges toward states where the resident population goes extinct and only a stable level of tourists is observed: this is a full 'Disneyfication' scenario where the city turns into a tourist theme park, and where the eventual level of tourists depends on initial conditions. As could be expected, this is due to the fact that the relevance threshold for tourists is very low with respect to that for residents, and consequently, tourists take over local services and resources, expelling the residents. However, for very low initial levels of tourists and high enough levels of residents, there are also trajectories where residents take over the city, letting tourists go extinct or remain present at very low levels. As the relevance threshold ȳ grows to 1.1 (yellow region in Fig. 2b), making access to resources more demanding for tourists, we witness the emergence of a stable attractor where residents and tourists stably coexist in the long term, approaching this state through a cyclical adjustment path, whose basin of attraction is delimited by the yellow region. Outside this basin, depending on the initial level of tourists vs. residents, we find as before that either tourists take over entirely, or residents do, entirely or partially (that is, with a more or less high level of tourists observed in the long term).
As ȳ is brought further up to 1.3151 (indigo region), the stationary state becomes unstable and cyclical behaviors emerge within the yellow region, whereas outside the region one still observes as before, depending on initial conditions, the eventual takeover of tourists or the emergence of a state with high levels of residents and some tourists. Finally, with ȳ at 1.35 (orange region), the system is destabilized, the stationary state is unstable and the trajectories may entail big oscillations where, while both the no-residents and prevailing-residents long-term states can materialize as before, it is also possible that the limit state is reached through expanding fluctuations. In particular, it is interesting to observe that as the conditions for accessibility of resources for tourists become more demanding as ȳ increases, the resulting dynamic behaviors do not simply favor residents; rather, what we observe is an increase of the system's dynamic variability, with the eventual emergence of cyclically diverging behaviors where big changes in the levels of residents vs. tourists are observed in time.
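Trajectories of this kind can be followed with any standard integrator. The sketch below uses a fixed-step fourth-order Runge-Kutta scheme with illustrative parameter values (an assumption on our part, not the values behind Fig. 3); with these numbers the trajectory spirals into a coexistence state, analogous to the cyclical adjustment path of the yellow-region case.

```python
# Follow one trajectory of the resident/tourist system with a fixed-step
# RK4 integrator. Parameters and initial condition are illustrative
# assumptions; they yield convergence to a coexistence state.
def rhs(x, y):
    # assumed values: r=1, a=0.1, b=1, c=1, s=1, d=0.2, e=0.3, f=1,
    # xbar=0.5, ybar=1
    return (1.0 + 0.1*x - (y - 1.0)*x + (x - 0.5),
            1.0 + 0.2*x*y + 0.3*(y - 1.0)*x - y)

def rk4_step(x, y, h):
    k1 = rhs(x, y)
    k2 = rhs(x + h/2*k1[0], y + h/2*k1[1])
    k3 = rhs(x + h/2*k2[0], y + h/2*k2[1])
    k4 = rhs(x + h*k3[0], y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x, y = 1.0, 2.0                  # initial residents and tourists
h = 0.01
for _ in range(20000):           # integrate up to t = 200
    x, y = rk4_step(x, y, h)
print(round(x, 3), round(y, 3))  # long-run levels of residents, tourists
```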
In Fig. 4, we highlight a different phenomenon, namely how the size of the basin of attraction of the stable stationary state varies with ȳ for a given value of x̄. We now fix x̄ = 2.2 and choose the values of ȳ in order to always remain within the yellow region of Fig. 2b, where a stable stationary state (attractor) exists. As we see in Fig. 4a, as ȳ increases, the size of the basin of attraction of the stable stationary state (denoted with a black dot) significantly increases. In Fig. 4b, we analogously set ȳ at the constant value 1.2 and let x̄ vary. In this case, as the increase in x̄ makes access to resources less and less easy for residents, the size of the basin of attraction of the stable stationary state gradually shrinks. Maintaining a viable access to resources for residents therefore causes, as one might expect, a dynamic stabilization of the system.
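A basin of attraction of this kind can be estimated numerically by integrating a grid of initial conditions and checking which trajectories settle at the stationary state. The sketch below does this crudely for illustrative parameter values (again an assumption, not the configuration of Fig. 4):

```python
# Crude basin-of-attraction estimate: integrate a grid of initial
# conditions and count how many settle at a stationary state. All
# numerical values are illustrative assumptions.
def rhs(x, y):
    # assumed values: r=1, a=0.1, b=1, c=1, s=1, d=0.2, e=0.3, f=1,
    # xbar=0.5, ybar=1
    return (1.0 + 0.1*x - (y - 1.0)*x + (x - 0.5),
            1.0 + 0.2*x*y + 0.3*(y - 1.0)*x - y)

def settles(x, y, h=0.02, steps=3000):
    """RK4 integration; True if the trajectory ends near a stationary point."""
    for _ in range(steps):
        k1 = rhs(x, y)
        k2 = rhs(x + h/2*k1[0], y + h/2*k1[1])
        k3 = rhs(x + h/2*k2[0], y + h/2*k2[1])
        k4 = rhs(x + h*k3[0], y + h*k3[1])
        x += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        if not (abs(x) < 1e6 and abs(y) < 1e6):  # guard against blow-up
            return False
    fx, fy = rhs(x, y)
    return abs(fx) < 1e-3 and abs(fy) < 1e-3

grid = [(0.5 * i, 0.5 * j) for i in range(1, 9) for j in range(1, 9)]
converged = sum(settles(x0, y0) for x0, y0 in grid)
print(f"{converged}/{len(grid)} initial conditions reach the attractor")
```

Refining the grid and repeating the count for different threshold values would trace out how the basin grows or shrinks, which is what Fig. 4 reports for the paper's own parameter configuration.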
Figure 5 reports yet another angle of analysis, namely, how the coordinates of the stationary state vary with ȳ for a given level of x̄ and of e. The other parameters are still kept at the usual values. We see that, as ȳ increases, the stationary state entails smaller equilibrium levels of both tourists and residents. However, for a given ȳ, increases in x̄ imply lower levels of tourists at the stationary state. This pattern of course only informs us about the composition of the stationary state, but not about its stability properties or, if attractive, about the size of its basin of attraction. In Fig. 5b, as could be expected, as c grows, we see that the stationary state entails lower and lower levels of tourism, all other things being equal. Beyond a certain threshold for c, the steady-state level of tourists keeps declining even when e increases, whereas below the threshold an increase in e causes a corresponding increase of the level of tourists at the steady state.
We have checked the robustness of our simulation results through further, extensive numerical tests that are not reported here for brevity and which confirm our analysis.
---
Discussion and conclusions
We have built a simple model to study the conditions for the emergence of overtourism through mathematical simulation of a predator-prey-inspired dynamical system. The core element that drives our dynamics is the competition for the accessibility of resources and services between residents and tourists, a feature that is typical of overtourism and is mainly responsible for its most disruptive effects. The model has been further enriched with a few elements that capture effects such as tourist congestion or the experience value for tourists deriving from the interaction with residents or from the intrinsic attractiveness of the city. Even when the model is studied in its most essential form, the dynamic analysis is challenging.
Our model shows that, under suitable conditions, overtourism may emerge, to the point of causing a full 'Disneyfication' of the city with the eventual extinction of all residents and its final transformation into a tourist theme park. However, the reverse outcome is also possible, with tourists disappearing from the city or reaching a stable level without taking over the local economy. Of course, in addition to these extreme cases, a stable coexistence of residents and tourists is also possible, but equally possible are more complex dynamics that may entail stable cyclical oscillations or wide variations in the relative levels of the populations of residents and tourists. The outcome that is eventually reached depends on a very complex constellation of parameters, each of which plays a specific role that can, however, be fully understood only by means of a thorough analysis.
What we have learnt from this study is that, in a nonlinear setting, acting on specific parameter values may cause counterintuitive effects. As we have seen, some cities have decided to tackle overtourism by restricting tourists' access to local services and resources. In our model, this basically amounts to raising the relevance threshold of tourism as it makes the conditions for access to tourist-specific resources more demanding. However, this does not necessarily entail the eventual reduction of the number of tourists or even the reaching of a stable stationary state where the number of tourists is under control. It may happen instead that the main effect of raising the threshold ȳ is destabilizing the system, for instance by causing the emergence of large oscillations in the levels of residents and tourists. This means that, contrary to commonsense approaches, it is important to understand how certain measures affect the whole structural organization of the local economy. The interplay with factors such as congestion, intrinsic attractiveness, or experience value can generate complex dynamic effects that influence the existence and stability of stationary states, and more generally the dynamic behavior of the system.
It is interesting to notice that, in determining the existence and stability properties of the stationary states of the model, certain composite parameters play a more substantial role than others. In particular, residents' susceptibility (B) is the key parameter in determining the dynamic regime that prevails, whereas residents' resilience (A) and tourists' susceptibility (C) play practically no role, although it is far from excluded that they may play a role in the dynamic behavior of the system far from equilibrium. The central point seems therefore to be the conditions for access to local services and resources by residents. Promoting residents' access does not merely amount to restricting access to the same resources for tourists. Lowering residents' susceptibility might be a better strategy and also a source of stabilization of the system. This goal may be reached, for instance, by providing better social and welfare services to residents, by supporting social entrepreneurship that better addresses critical local needs, by improving the quality of key resident services such as kindergartens or retirement homes, and so on. What is important to stress is that, in a nonlinear system, even relatively small changes may make a big difference, for better or for worse. Building models that allow one to estimate the likely impact of policy measures therefore becomes crucial as an essential support tool for public decision making.
It is unlikely that overtourism will be successfully dealt with by cities through the implementation of occasional measures without a clear evidence-based strategy informed by solid knowledge of the underlying system of structural interdependencies, not unlike what happens in the management of ecological systems. Our study has clear limitations, due to the extreme simplicity of the model, which disregards many potentially relevant factors. In particular, the role of residents' and tourists' expectations and attitudes, which, as we have seen, is an important aspect in the current evaluation of the social and economic impacts of tourism, could also be modeled, with all the ensuing complexities arising from cultural transmission effects, misperceptions and biases, manipulation of consensus, and so on. Another important limitation is that an empirical estimation of the values of the model parameters is not simple and would call for a sophisticated nonlinear econometric analysis. The data requirements are also demanding, as, ideally, very long time series of the resident/tourist populations of cities with significant or potential overtourism issues would be needed. The nonlinearity of the model implies that even relatively small estimation errors might have big consequences for the projected dynamics, yielding potentially misleading indications. The present paper therefore has mainly a conceptual value in drawing attention to the dynamic complexity of the socioeconomic dynamics of overtourism, and the ensuing necessity to carefully assess the long-term effects of policy changes even when they intuitively seem to respond effectively to outstanding issues.
Curbing tourist congestion through the reduction of commercial licenses for tourism-related businesses, for instance, looks like an appealing solution, but its long-term consequences might be more complex than one could expect, depending on the overall structure of the local economy and its 'ecosystemic' interdependencies. In its current form, our model is not tailored to guiding policy design choices, a task that requires suitably calibrated empirical models. But we hope that this first study may inspire further, more sophisticated analyses that will serve in turn as a guide for the construction of policy-oriented tools. We look forward to this promising perspective.

At ȳ = ȳH we obtain Δ(ȳH) = 91.73884257 > 0, and therefore condition 2 holds. Finally, at ȳ = ȳH we have dT(ȳ)/dȳ = -12.92610372 ≠ 0, so that a Hopf bifurcation occurs at the parameter value ȳH = 1.309562086.
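The oscillatory regimes associated with the Hopf bifurcation can be illustrated with a direct simulation. The system (1)-(2) is not written out in this excerpt, so the right-hand sides below are a reconstruction chosen to be consistent with the isoclines (6)-(7) and the Jacobian (8); all parameter values are illustrative assumptions, not the paper's calibration.

```python
import math

# Reconstructed right-hand sides, consistent with the isoclines (6)-(7)
# and the Jacobian (8). Illustrative parameters (NOT the paper's values).
A, B, C, D, E, b, f = 10.0, 0.2, 1.0, 2.0, 1.0, 1.0, 5.0

def rhs(x, y):
    dx = b * ((A - y) * x - A * B)               # vanishes on y = A(x - B)/x
    dy = (f / E) * ((x - E) * y - C * (x - D))   # vanishes on y = C(x - D)/(x - E)
    return dx, dy

# Non-saddle stationary state for these parameters (root of the
# isocline-intersection quadratic).
xs = (10.0 + math.sqrt(28.0)) / 18.0
ys = A * (xs - B) / xs
assert abs(rhs(xs, ys)[0]) < 1e-9 and abs(rhs(xs, ys)[1]) < 1e-9

def rk4_step(x, y, dt):
    k1 = rhs(x, y)
    k2 = rhs(x + dt / 2 * k1[0], y + dt / 2 * k1[1])
    k3 = rhs(x + dt / 2 * k2[0], y + dt / 2 * k2[1])
    k4 = rhs(x + dt * k3[0], y + dt * k3[1])
    return (x + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# Integrate a small perturbation: for these parameters the state is a
# repeller with complex eigenvalues, so the trajectory spirals outwards,
# oscillating around the stationary level of x.
x, y, dt = xs + 1e-3, ys, 0.001
traj = [(x, y)]
for _ in range(2000):                            # t in [0, 2]
    x, y = rk4_step(x, y, dt)
    traj.append((x, y))

d0 = 1e-3
maxdist = max(math.hypot(px - xs, py - ys) for px, py in traj)
crossings = sum((p[0] - xs) * (q[0] - xs) < 0 for p, q in zip(traj, traj[1:]))
print(round(maxdist / d0, 1), crossings)
```

With these values the perturbation grows while repeatedly crossing the stationary level, the local mechanism behind the "large oscillations in the levels of residents and tourists" discussed in the conclusions.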
---
Existence and stability of stationary states
In order to study the existence of the stationary states, we rewrite the system (1)-(2) in the form (4)-(5), so that the isoclines (i.e., G(x, y) = 0 and H(x, y) = 0) of the dynamical system become:

y = g(x) := A (x - B)/x   (6)
y = h(x) := C (x - D)/(x - E).   (7)
It is easy to check that the above functions are two hyperbolas with the following properties:
i. the function y = g(x) (Fig. 6a,b) presents a horizontal asymptote at y = A, a vertical one at x = 0, and its graph crosses the x-axis at x = B (sign(B) = sign(c_xr));
ii. the function y = h(x) (Fig. 6c,d) presents a horizontal asymptote at y = C, a vertical one at x = E, and its graph crosses the x-axis at x = D.

Remark 1 The graphs of g(x) and h(x) can have at most two intersection points and, therefore, at most two stationary states exist. Furthermore, under the assumption that all parameters of the system (1)-(2) are strictly positive, it is easy to check that the inequality A > B is always satisfied.
Overlapping the pairs of Fig. 6a-c, a-d and b-d, we obtain all possible intersections between the two isoclines as shown in Fig. 7a-g. This proves the claim about the existence of the stationary states of Proposition 1.
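The existence claim can also be checked numerically. Equating the isoclines (6) and (7) gives A(x - B)(x - E) = C x(x - D), a quadratic in x, which is why at most two stationary states exist (Remark 1). The sketch below uses illustrative parameter values, not values taken from the paper:

```python
import math

# Equating (6) and (7): A(x - B)(x - E) = C x (x - D), i.e.
# (A - C) x^2 + (C D - A(B + E)) x + A B E = 0 -- a quadratic, hence at
# most two intersection points, consistent with Remark 1.

def stationary_states(A, B, C, D, E):
    qa, qb, qc = A - C, C * D - A * (B + E), A * B * E
    disc = qb * qb - 4 * qa * qc
    if disc < 0:
        return []                                  # no stationary state
    states = []
    for sign in (1.0, -1.0):
        x = (-qb + sign * math.sqrt(disc)) / (2 * qa)
        states.append((x, A * (x - B) / x))        # y from isocline (6)
    return states

states = stationary_states(A=10.0, B=0.2, C=1.0, D=2.0, E=1.0)
for x, y in states:
    # each intersection also lies on isocline (7)
    assert abs(y - 1.0 * (x - 2.0) / (x - 1.0)) < 1e-6
print([(round(x, 4), round(y, 4)) for x, y in states])
```

For these parameters two intersections exist, corresponding to the two-stationary-state regime of Proposition 1.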
In order to study the stability properties of the stationary states, we compute the Jacobian matrix of the system (4)-(5), evaluated at the stationary state P*:
J(P*) = [ b(A - y*)        -b x*
          (f/E)(y* - C)    (f/E)(x* - E) ]   (8)
We know that the signs of the determinant D(J(P*)) and the trace T(J(P*)) of the matrix (8) give us the stability properties of the stationary state. In particular, if D(J(P*)) < 0, then the stationary state is a saddle point; if D(J(P*)) > 0 and T(J(P*)) > 0 (< 0), the stationary state is a repeller (an attractor). We prove the result for the sub-regime 0 < B < E < D (see claim (1.b) in Proposition 1) shown in Fig. 7d. The claims for the other sub-regimes can be proven in the same way.
We observe that the slopes of the curves G(x, y) = 0 and H(x, y) = 0 are given by:

m_G(x, y) = -(y - A)/x,   m_H(x, y) = -(y - C)/(x - E)

at any given stationary state P*.
In this respect, we rewrite the determinant as

D(J(P*)) = b (f/E) x* (x* - E) (m_G(x*, y*) - m_H(x*, y*)).

Since y* - A < 0, y* - C > 0, and x* - E < 0, the stability analysis can be developed as follows:

i. At the stationary state P1, the curves G = 0 and H = 0 are both increasing and the slope of G = 0 is greater than that of H = 0. Then the determinant D(J(P1)) is strictly negative and the stationary state is a saddle.
ii. At the stationary state P2, the curves G = 0 and H = 0 are both increasing and the slope of H = 0 is greater than that of G = 0. Then the determinant D(J(P2)) is strictly positive and the stationary state is either a repeller or an attractor depending on the sign of the trace T(J(P2)).
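The same sign test can be verified by evaluating the entries of the Jacobian (8) directly. The parameter values below are illustrative assumptions (with b = f = 1), not the paper's calibration:

```python
import math

# Entries of the Jacobian (8) at P* = (x, y):
#   [[ b(A - y),     -b x         ],
#    [ (f/E)(y - C), (f/E)(x - E) ]]
# det < 0 -> saddle; det > 0 with trace > 0 (< 0) -> repeller (attractor).
A, B, C, D, E, b, f = 10.0, 0.2, 1.0, 2.0, 1.0, 1.0, 1.0

def classify(x, y):
    det = b * (A - y) * (f / E) * (x - E) + b * x * (f / E) * (y - C)
    tr = b * (A - y) + (f / E) * (x - E)
    if det < 0:
        return "saddle"
    return "repeller" if tr > 0 else "attractor"

# The two stationary states for these parameters (roots of the quadratic
# obtained by equating the isoclines), in increasing order of x.
results = []
for x in ((10.0 - math.sqrt(28.0)) / 18.0, (10.0 + math.sqrt(28.0)) / 18.0):
    y = A * (x - B) / x
    results.append(classify(x, y))
print(results)
```

For these values the lower-x state is a saddle (P1) and the other is a repeller, matching the dichotomy stated in the text for P1 and P2.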
---
Appendix
---
Hopf bifurcation
Consider the Jacobian matrix of the system (1)-(2) evaluated at a stationary state P* = (x*, y*). Liu (1994) derived a criterion to prove the existence of a Hopf bifurcation without using the eigenvalues of the matrix J(P*). According to Liu's criterion, if the stationary state P* depends smoothly upon a parameter p ∈ (0, p̄), and there exists a parameter value p_H ∈ (0, p̄) such that the characteristic equation of J(P*), λ² + T(p)λ + Δ(p) = 0, satisfies the conditions (1) T(p_H) = 0, (2) Δ(p_H) > 0, and (3) dT(p)/dp at p = p_H is nonzero, then a Hopf bifurcation occurs at p = p_H.
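Liu's conditions can be checked numerically: track the trace and determinant of (8) at the non-saddle stationary state while one parameter varies, and locate the value where the trace crosses zero with the determinant positive and a nonzero derivative (transversality). The sketch below varies f purely for illustration, since the mapping from the paper's threshold ȳ onto these composite parameters is not given in this excerpt; all values are assumptions.

```python
import math

# Numeric check of Liu's criterion for the reconstructed system behind (8).
# Illustrative parameters; f is used as the bifurcation parameter here
# (the paper varies the threshold y-bar instead).
A, B, C, D, E, b = 10.0, 0.2, 1.0, 2.0, 1.0, 1.0

# Non-saddle stationary state for these A..E (it does not depend on b, f).
x = (10.0 + math.sqrt(28.0)) / 18.0
y = A * (x - B) / x

def trace(f):
    return b * (A - y) + (f / E) * (x - E)

def delta(f):   # determinant of (8)
    return (f / E) * (b * (A - y) * (x - E) + b * x * (y - C))

# Bisection for the zero crossing of the trace (candidate Hopf point).
lo, hi = 1.0, 30.0
assert trace(lo) > 0 > trace(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if trace(mid) > 0 else (lo, mid)
f_H = 0.5 * (lo + hi)

# Liu's conditions: trace vanishes, determinant positive, transversality.
dT = (trace(f_H + 1e-6) - trace(f_H - 1e-6)) / 2e-6
print(round(f_H, 3), delta(f_H) > 0, abs(dT) > 1e-12)
```

The same scan, applied to a calibrated model in ȳ, would reproduce the kind of numerical verification reported above (Δ(ȳH) > 0 and dT/dȳ ≠ 0 at ȳH).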
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

---
Background: We aimed to determine the factors that increase the risk of high-risk fertility behaviour (HRFB) in Bangladeshi women of reproductive age 15-49 years. Methods: The study utilised the latest Bangladesh Demographic and Health Survey (BDHS) 2017-18 dataset. Pearson's chi-square test was performed to determine the relationships between the outcome and the independent variables, while multivariate logistic regression analysis was used to identify the potential determinants associated with HRFB. Results: Overall, 67.7% of women had HRFB; among them, 45.6% were at single risk and 22.1% were at multiple high risks. Women's age (35-49 years: AOR = 6.42, 95% CI 3.95-10.42), being Muslim (AOR = 5.52, 95% CI 2.25-13.52), having normal childbirth (AOR = 1.47, 95% CI 1.22-1.69), having an unwanted pregnancy (AOR = 10.79, 95% CI 5.67-18.64), and not using any contraceptive methods (AOR = 1.37, 95% CI 1.24-1.81) were significantly associated with an increased risk of HRFB. Alternatively, women's and their partners' higher education was associated with reduced HRFB. Conclusions: A significant proportion of Bangladeshi women had high-risk fertility behaviour, which is quite alarming. Therefore, public health policy makers in Bangladesh should emphasise this issue and design appropriate interventions to reduce maternal HRFB.

---

Background
Women's high-risk fertility behaviour (HRFB) is defined by narrow birth intervals, high birth order, and young maternal age at birth, and has been associated with negative health outcomes for both the mother and the child [1,2]. Maternal HRFB is a bio-demographic risk factor that impedes the achievement of lower maternal and child morbidity and mortality [3-7]. Some demographic variables, such as women's age, parity, and birth spacing, are the crucial parameters for measuring HRFB, including too-early (< 18 years) or too-late (> 34 years) childbearing, short birth intervals (< 24 months), and a higher number of live births (4 or more) [3,4,7,8]. Although the total fertility rate (TFR) of Bangladesh declined from 3.7 in 1995 to 2.04 in 2020 [9], the rate of teenage pregnancy remains about 35%, and 15.1% of women gave birth at intervals of less than 24 months. Compared with many developing countries, Bangladesh has among the highest rates of adolescent fertility, with 82 births per 1000 women as of 2019, and over 50 percent of adolescents gave birth between the ages of 15 and 19 [10].
Several studies identified that early or late motherhood is associated with hypertension, premature labor, anemia, gestational diabetes, diabetes, obesity, pregnancy-related complications, higher rates of caesarean and operative deliveries, and unsafe abortions [11,12]. Childbearing at an early age (< 18 years) is connected to a growing risk of intrauterine growth restriction, child undernutrition, preterm birth, and infant mortality. On the other hand, late motherhood (> 34 years) is related to preterm births, intrauterine growth restriction, stillbirths, amniotic fluid embolism, chromosomal abnormalities, and low-birth-weight newborns [12,13]. Maternal HRFB is also associated with neonatal mortality: a study in India identified a causal effect of birth spacing on neonatal mortality [14], and teenage childbearing has also been linked to neonatal mortality [15]. Some previous studies established a relationship between numerous HRFB-related parameters and their detrimental effects on maternal and infant health [7,8,16,17]. Women who start having children at an early age often have more children [18], and this is also associated with adverse maternal, infant, and child health outcomes [19]. Likewise, short birth intervals (< 24 months) [20] and higher birth order [21] may aggravate infant and child mortality. Although such evidence supports the consideration of different exposures to high-risk fertility behaviours as a high-priority maternal and child health concern, very few studies in Bangladesh have evaluated factors related to HRFB in women of reproductive age. Therefore, in order to develop effective prevention programs for the region, a clear understanding of the determinants and potential risk factors for maternal high-risk fertility behaviour among Bangladeshi women is required. There is, however, a dearth of literature examining the risk factors for HRFB in Bangladesh.
To date, most studies on HRFB in Bangladesh have focused on identifying the relationship between women's HRFB and maternal and child health outcomes [7,17,22]. Based on these considerations, this study aimed to identify the factors associated with HRFB in women. Identifying such determinants will be crucial for formulating evidence-based programs in Bangladesh, especially those targeting the significant risk factors.
---
Methods
---
Data sources
The study relied on data from the Bangladesh Demographic and Health Survey (BDHS) 2017-18. The National Institute of Population Research and Training (NIPORT) of the Ministry of Health and Family Welfare of Bangladesh used a two-stage stratified sampling approach to conduct this cross-sectional survey. The outcomes of our study were assessed using a total sample of 7757 women aged 15 to 49. The study included ever-married women aged 15-49 who were not currently pregnant and had at least one child before the survey. Unmarried women and pregnant mothers with incomplete BMI information were excluded from the sample. The data collection procedures and sampling frame are detailed in the original BDHS 2017-18 report [23].
---
Outcome variable
The outcome variable for this study was maternal "high-risk fertility behaviour", developed using the definition of the BDHS [23]. The study considered three variables to define high-risk fertility behaviour: (a) maternal age at the time of delivery, (b) birth order, and (c) birth interval. The presence of any one of the following conditions was termed a single high-risk fertility behaviour: (i) mother's age less than 18 years at the time of childbirth; (ii) mother's age over 34 years at the time of childbirth; (iii) latest child born less than 24 months after the previous birth; and (iv) latest child's birth order 3 or higher. Multiple high-risk categories are made up of two or more of the aforesaid conditions. High-risk fertility behaviour was defined as the presence of any of the four conditions listed above (coded as 1, and 0 otherwise) for the final analysis.
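As a concrete sketch, the four conditions above can be coded per woman roughly as follows. The function and argument names are hypothetical, not actual BDHS recode variables:

```python
# Sketch of the HRFB coding described above; names are hypothetical, not
# actual BDHS recode variables. birth_interval_months is None for first
# births, which have no preceding birth interval.

def hrfb_category(age_at_delivery, birth_order, birth_interval_months):
    conditions = [
        age_at_delivery < 18,                                   # (i)
        age_at_delivery > 34,                                   # (ii)
        birth_interval_months is not None
        and birth_interval_months < 24,                         # (iii)
        birth_order >= 3,                                       # (iv)
    ]
    n = sum(conditions)
    if n == 0:
        return "no risk"                     # coded 0 in the final analysis
    return "single risk" if n == 1 else "multiple risk"   # both coded 1

print(hrfb_category(17, 1, None),   # too-early childbearing only
      hrfb_category(36, 4, 18),     # late age + high order + short interval
      hrfb_category(25, 2, 30))     # none of the four conditions
```

For the binary outcome used in the regression, the single-risk and multiple-risk categories are collapsed to 1.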
---
Independent variables
The researchers reviewed the most recent relevant articles to determine the independent variables. The selected sociodemographic and economic variables (independent) included in the analysis are: place of residence (urban and rural), administrative division (Barishal, Chottogram, Dhaka, Khulna, Mymensingh, Rajshahi, Rangpur, Sylhet), religion (Islam, Hindu and other), age (15-24, 25-34 and 35-49 years), age at marriage (< 18 and ≥ 18 years), education (no education, primary, secondary and higher), access to television (no and yes), body mass index (according to WHO [24]; underweight: < 18.50 kg/m 2 , normal: 18.50-24.99 kg/m 2 , overweight/obese: ≥ 25.00 kg/m 2 ), current working status (currently working and not working), partner's education (no education, primary, secondary, higher); partner's occupation (agricultural, business, non-agricultural, other). Reproductive factors: birth order (1-2, > 3), antenatal care (ANC) seeking (no, yes) and current use of contraceptive methods (yes, no), types of childbirth (normal, caesarean), place of childbirth (home, facility birth), pregnancy wanted (then, later, no more).
---
Statistical analysis
The frequency and percentage of the selected attributes were determined using descriptive statistics. Pearson's chi-square test was performed to show the association between the outcome variable and the specified independent variables at the bivariate level. Finally, the factors related to "high-risk fertility behaviour" were determined using logistic regression analysis, retaining components significant at p < 0.05 in the multivariate model. These analyses included both unadjusted odds ratios (UORs) and adjusted odds ratios (AORs), along with 95% confidence intervals (CIs). Multicollinearity among covariates was checked for all models using variance inflation factors (VIFs), which were modest (VIF ≤ 2) for all covariates. The Statistical Package for the Social Sciences (SPSS, version 25.0) was used to conduct all statistical analyses.
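For intuition, the bivariate building blocks, Pearson's chi-square and an unadjusted odds ratio with a 95% CI, can be computed by hand for a 2x2 table. The numbers below are made up for illustration and are not BDHS estimates:

```python
import math

# Hypothetical 2x2 table (made-up numbers, NOT BDHS estimates):
# rows = a binary covariate (yes/no), columns = HRFB (yes/no).
a, b, c, d = 60, 40, 30, 70
n = a + b + c + d
rows, cols = (a + b, c + d), (a + c, b + d)
observed = ((a, b), (c, d))

# Pearson chi-square: sum over cells of (observed - expected)^2 / expected,
# with expected[i][j] = row_total[i] * col_total[j] / n.
chi2 = sum((observed[i][j] - rows[i] * cols[j] / n) ** 2
           / (rows[i] * cols[j] / n)
           for i in range(2) for j in range(2))

# Unadjusted odds ratio with a Woolf-type 95% CI on the log-odds scale.
uor = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci = (math.exp(math.log(uor) - 1.96 * se), math.exp(math.log(uor) + 1.96 * se))
print(round(chi2, 2), uor, tuple(round(v, 2) for v in ci))
```

The adjusted odds ratios reported in the paper come instead from the multivariate logistic model, which conditions on the other covariates simultaneously.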
---
Ethical consideration
DHS data are in the public domain and freely available to anyone who makes a reasonable request. The entire study protocol was approved by the Bangladesh Ethics Committee and ICF International; thus, we did not need any additional ethical approval. The BDHS 2017-18 report contains details about the ethical approval [23].
---
Results
---
Background characteristics and prevalence of HRFB
The final study included 7757 women who had given birth within the previous five years. The median age of the respondents was 25.0 years. More than half (56.8%) of the women were aged 15 to 24 years. Most women (71.6%) lived in rural areas, and an overwhelming majority (90.5%) were Muslims. Over half of the women had finished secondary education, and 62.8% were unemployed (Table 1).
The worst situation was found in rural areas for both single and multiple HRFB. About 46.7% of respondents from rural areas had single HRFB compared to 11.7% from urban areas. Similarly, 24.5% of women from rural areas were at multiple HRFB, compared with only 4.4% of women from urban areas (Fig. 1). Figure 2 demonstrates the prevalence of HRFB across different administrative divisions of Bangladesh. The highest prevalence of single-risk fertility behaviour was found in Dhaka (10.5%), followed by the Chottogram division (9.7%). However, the highest rate of multiple HRFB was found in the Chottogram division.
---
Reproductive characteristics and high-risk fertility behaviour
Most women (63.8%) had a recent normal childbirth, and 54.3% gave birth at a healthcare center. Of the total mothers, a significant portion (91.9%) completed ANC follow-up for their recent pregnancy (Table 2).
---
Factors associated with high-risk fertility behaviour
Both univariate and multivariate logistic regression models were used to identify potential risk factors; however, because the multivariate model controlled for the confounding effects of covariates, we used only the adjusted results to interpret the findings. Muslim women had higher odds of high-risk fertility behaviour (Adjusted Odds Ratio [AOR] = 5.52, 95% Confidence Interval [CI] 2.25-13.52, p < 0.001) than women of other religions. HRFB was far less likely among younger women (15-24 years; AOR = 0.19, 95% CI 0.10-0.30, p < 0.001), corresponding to 81% lower odds, and 6.42 times more likely in women over 35 years (AOR = 6.42, 95% CI 3.95-10.42, p < 0.001). Women who had normal childbirths had higher odds of HRFB (AOR = 1.47, 95% CI 1.22-1.69, p = 0.003) compared to those who had a caesarean section. Women who had unwanted pregnancies were 10.79 times more likely to have high-risk fertility than women whose pregnancies were desired (AOR = 10.79, 95% CI 5.67-18.64, p < 0.001). Women who did not currently use contraceptive methods were 1.37 times more likely to have HRFB than their counterparts (AOR = 1.37, 95% CI 1.24-1.81, p < 0.001). The odds of HRFB were disproportionately distributed across the divisional regions. On the other hand, being aged 25 to 34 years, having secondary or higher education, and having a partner with higher education reduced the odds of HRFB (Table 3).
---
Discussion
This study showed that 67.7% of women had HRFB, of which 45.6% were in the single high-risk category and 22.1% were in multiple high-risk categories. This high prevalence demonstrates that HRFB is all too common in Bangladesh, potentially endangering the health of the country's women. We found that women who were Muslim, were aged above 35 years, had normal childbirth, had low literacy levels, had unwanted pregnancies, or did not use birth control methods were at increased risk of HRFB. When compared to women who have never had any formal education, those with a higher level of education had a lower likelihood of high-risk fertility behaviour. This result is supported by previously conducted studies [22, 25-27]. The reason for this could be that having no formal education impacts work status and leads to lower income and independence, all of which may affect fertility behaviour.

In this study, visiting ANC was found to be a facilitating factor for reducing the odds of HRFB. This is probably because antenatal care provides opportunities to reach pregnant women with a variety of interventions that may be essential to their health and well-being [28,29]; thus, they were more likely to receive information regarding the importance of routine check-ups, maternal nutrition, delivery complications, and the risk of HRFB. On the other hand, women who did not have ANC follow-ups for their recent children were more likely to engage in risky reproductive behaviours. Family planning for extending the time between births is discussed during postnatal care counselling. As a result, decreased ANC seeking during pregnancy may play a role in HRFB.
Another important finding from this study is that women with a history of caesarean delivery were less likely to have high-risk fertility behaviour. Other studies on the association between type of delivery and subsequent fertility [30,31] report similar results. The reason may be that women who deliver by caesarean section are less likely to have more children than women who deliver vaginally, and caesarean delivery is also followed by a higher likelihood of actively using contraception after that birth, which may lead to lower odds of HRFB.
This study revealed that HRFB was more likely to occur among women who had never used contraception compared to those who had, which is in line with previous studies conducted elsewhere [32,33]. One of the goals of contraception is to increase the birth interval and reduce unplanned pregnancies. Women who had unwanted pregnancies were more likely to engage in high-risk reproductive behaviour than those whose pregnancies were desired. This may result from non-use of contraceptive methods by women who experienced unwanted pregnancies. This result also corroborates a study conducted in Nigeria [25].
Moreover, religious belief also affected maternal HRFB. Our study revealed that Muslim women had increased odds of HRFB compared with women of other religions. This finding is in line with an Indian study [34], in which the author argued that Muslim women are less willing to use contraceptive methods and family planning and prefer temporary methods over sterilisation; these could be plausible reasons why Muslim women in Bangladesh were at higher risk of HRFB.
Evidence suggests that women aged 35-49 have higher odds of HRFB than their counterparts. Similar results were found in other studies, which concluded that pregnancy at a later stage is associated with significant increases in maternal risks and complications [35,36], leading to adverse outcomes for both the mother and the child.
Furthermore, high-risk fertility behaviours were found to be more than twice as common among women in Rangpur, a northern region of Bangladesh, compared to women living in Sylhet. This is probably because women in remote locations may lag behind in utilising reproductive health services such as ANC, have poor family planning adoption rates related to religious beliefs and community attitudes, and have poor literacy levels. However, this inequity in utilising reproductive health facilities across regions of Bangladesh should be minimised to reduce the odds of HRFB. This analysis may lead to important inferences that could help lower maternal high-risk fertility behaviour and can be useful and relevant in areas where HRFB is ubiquitous. The strengths and limitations of this study are well recognised. The study employed the recently published BDHS 2017-18 data, which had a large, country-representative sample size, allowing the findings to be more generalisable. Moreover, the appropriate statistical techniques applied in the analysis can be used to find probable components and their relationships. However, the study has some limitations. For instance, due to the cross-sectional design, outcome and predictor variables were collected at a single point in time; therefore, causality cannot be established. In addition, some important factors, such as dietary factors, physical activity, and maternal comorbidity histories, were not taken into consideration due to their unavailability in the original dataset, though these factors may be associated with HRFB.
---
Conclusions
This study highlighted the pervasiveness of maternal high-risk fertility behaviour among Bangladeshi women of reproductive age. Several significant protective factors, such as maternal and partners' higher education, were associated with lower HRFB. In contrast, being Muslim, being aged 35 to 49 years, having normal childbirth, having unwanted pregnancies, and not using any birth control tools may increase women's risk of HRFB. Thus, the findings identify the need to develop interventions, especially focusing on Bangladeshi Muslim women aged 35-49 years, to reduce high-risk fertility behaviour. Furthermore, the government of Bangladesh and stakeholders (e.g., NGOs, INGOs) should work jointly to prevent early marriage of women and to enhance awareness and proper education to reduce high-risk fertility behaviour.
---
Availability of data and materials
This study used publicly available Demographic and Health Surveys Program datasets from Bangladesh, which can be freely obtained from https://dhsprogram.com/. As third-party users, we do not have permission to share the data publicly on any platform.
---
Authors' contributions
MHH and MAR conceptualised the research idea and study design. MAR explored the data and performed analysis with the guidelines of MHH. MHH, SK, HRH checked and validated the results. MHH, MAR, HOR drafted the manuscript with the support from MHH. SK, HRH, SKC critically reviewed the manuscript for scientific coherence. MAR supervised the whole study. All authors read and approved the final manuscript.
---
Declarations Ethics approval and consent to participate
The current study involved analyzing secondary data, which is publicly accessible at www.dhsprogram.com free of cost upon appropriate application. The ICF Institutional Review Board and the Ethical Review Board of the Ministry of Health approved the data collection and survey process; the data had already been ethically approved for the primary investigations, so no additional ethical approval was required.
---
Consent for publication
Not applicable.
---
Competing interests
None of the authors declares any conflict of interest.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

---
This study focuses on users' practices involved in creating and maintaining Facebook memorial Pages by adapting the theoretical perspective of the social capital approach. It examines 18 Pages in Israel, which are dedicated to ordinary people who died in nonordinary circumstances. We employ qualitative analysis based on a digital ethnography conducted between 2018 and 2021. Our findings show how memorial Pages serve as social capital resources for admin users. Admins negotiate Facebook affordances when creating, designing, and maintaining such Pages. They discursively position the deceased as a respectable public figure worth remembering and their followers, who are otherwise strangers, as vital partners in this process. The resources followers provide range from economic capital and practical support to solidarity and emotional support. Finally, we point at the perceived connection users make between visible/measurable online engagement (Like, Share, Follow) and cognitive or emotive implications: public memory, recognition, and esteem.

---

Introduction
Over the last two decades, online practices of death, mourning, and memorialization have grown into a vibrant field of interest and research. Studying the intersection of death and digital media sheds light on novel commemorative practices, affective performances, and oscillations between personal and public spheres. As such, it contributes to our understanding of social practices of remembrance, underlined by the questions: Who is worthy of remembrance, and why and how are they remembered?
A major site of online mourning and memorialization is the social networking site Facebook. The oscillations between personal and public spheres are integrated into Facebook's internal logic and infrastructure. As a multifunctional platform, Facebook brings together several distinct communication channels, also known as "subplatforms" (Navon & Noy, 2021). Facebook's three main subplatforms are Profiles, Groups, and Pages. Profiles are personal accounts that represent the user as an individual; Groups allow interaction of several users, usually around a specific shared interest; and Pages serve as public channels, allowing a more unidirectional communication with broad audiences.
The official aim of Pages is to serve businesses, communities, organizations, and public figures who seek to increase their digital presence and connect with audiences and fans. It is an essentially public subplatform that is visible to anyone on Facebook by default (as opposed to Profiles or Groups) and may have an unlimited number of followers. Pages' administrators (admins) manage the interaction with the followers and the content on the Page. Interestingly, users commonly employ Pages in a memorial capacity and create Pages to memorialize and publicize ordinary people.
In this article, we look at memorial Pages that are dedicated to ordinary people who died in nonordinary circumstances (terror attacks, murder, suicide, etc.). Our data consist of 18 cases, all of whom are Israeli. The aim of this study is to examine the practices involved in creating and maintaining memorial Pages from the theoretical perspective of the social capital approach. We explore how creators of memorial Pages view the role of the Page, their motivations, and their relations with different audiences, such as strangers (Marwick & Ellison, 2012). We further analyze how admins interact with their network of followers and the various resources they accumulate through this process, from economic capital and practical support to solidarity and emotional support.
---
Literature review
---
Online mourning and memorialization
Online practices of remembrance and memorialization emerged in the mid-1990s, initially in the form of virtual cemeteries and private Web memorials (Roberts, 1999). This "first generation" of digital practices, as Walter (2015) terms it, "changed surprisingly little" compared to earlier offline practices (p. 10). It was only in the early to mid-2000s, with the rise of social media, that things have significantly changed.
Social media, and social network sites (SNSs) in particular, afford new means for grieving and commemorating, and influence the experience of death both online and offline. Brubaker et al. (2013) identified three expansions of death and mourning that SNSs afford and facilitate: a spatial expansion in which physical barriers to participation are dissolved; a temporal expansion that refers to the immediacy of information enabled by SNSs; and a social expansion that results in a context collapse and the inclusion of the deceased within the social space of mourning (see also Marwick & Ellison, 2012).
The intense social nature of social media is shaped by its inherent features of sharing, performance, and interaction. Sharing can be appreciated through different logics, while on SNSs the primary logic is communicative and not distributive (when someone shares her feelings or beliefs, she is not left with less). On SNSs, sharing is telling, where "fuzzy objects of sharing" are nonetheless associated with giving and caring (John, 2013). These sharing-telling practices involve multiple modes and strategies of self-presentation, identity negotiation, and performance (Papacharissi, 2010), which inevitably lead to intensified engagement and participation. This is true also of mourning and memorialization contexts, where engagement and participation can be viewed as a demonstration of communality and social support (Döveling, 2015; Walter, 2015). Alternatively, engagement and participation may also indicate social pressure and competition over who has the most significant contributions or the right to portray the deceased (Carroll & Landry, 2010; Marwick & Ellison, 2012; Walter, 2015). Social dynamics and engagements are complex, determined in part by the specific media (sub)platform and its affordances.
Studies of mourning and memorialization examine various social media platforms, including MySpace (Carroll & Landry, 2010), YouTube (Harju, 2015), Instagram (Gibbs et al., 2015), Twitter (Cesare & Branstad, 2018), and TikTok (Eriksson Krutrök, 2021). However, the most dominant platform, both in terms of research and of user practices, is Facebook. The vast number of dead users, along with the various practices and rituals that living users perform, qualifies Facebook as a "current center of gravity" for the discussion of online mourning and memorialization (Moreman & Lewis, 2014, p. 4).
---
Mourning and memorialization on Facebook
Studies of mourning and memorialization on Facebook point at multiple practices and uses. Such practices may commence chronologically with death announcements and the posting of information about memorial services (Babis, 2021; Carroll & Landry, 2010, respectively), and proceed to subsequent and more continuous practices, such as visiting the deceased's Profile and posting messages as a way to commemorate, express emotion, and remember special occasions (Pennington, 2013; Moyer & Enck, 2020, respectively).
An additional common practice, which lies at the heart of our current study, is the creation of memorial Pages. These Pages may be dedicated to individual subjects, groups of people, animals, and things such as places (Forman et al., 2012; Kern et al., 2013). They enable "para-social copresence" and continuing bonds (Irwin, 2015), as well as public presence of the deceased and engagement with strangers (Kern et al., 2013; Kern & Gil-Egui, 2017). Rossetto et al. (2015) point to three themes or functions that mourning and memorialization on Facebook possess: news dissemination, preservation, and community. News dissemination describes sharing or learning information about a death through Facebook. Preservation refers to the continued presence of the deceased and maintaining communication and connection with them. Lastly, the community theme refers to the connection and communication with people other than the dead. It includes connecting with other mourners, seeking and offering social support, and expressing one's feelings and thoughts, while at the same time facing a challenge to privacy.
One way to face this privacy challenge and negotiate boundaries is through Facebook subplatforms. In a study of mourning and memorialization practices across Facebook's subplatforms, Navon and Noy (2021) outline a spectrum that ranges from private to public and accordingly from a more personal sphere of mourning to a larger and more institutional sphere of memorialization. Located on one side of the spectrum, Profiles are characterized by expressive and emotive communication, hence turning with time into personal mourning logs on the bereaved's Profile and online mourning guestbooks on the deceased's Profile.
Located on the spectrum's other side, Pages possess a distinctly public quality and serve as online memorialization centers where the deceased becomes an icon and is portrayed in one dominant way. Finally, Groups are positioned in-between, possess a hybrid nature, and combine self-expression and emotional sharing along with more public aspects. This results in Groups affording the revival of once-prevalent bereaved communities (Navon & Noy, 2021).
The triadic spectrum we outlined corresponds with the three levels of social death that Refslund and Gotved (2015) have put together. First, the individual level focuses on the personal loss (Profiles); second, the community level revolves around an extended network: relatives, neighbors, colleagues, and other acquaintances of the deceased (Groups); and third, the cultural or public level (Pages), refers to the death of people not personally known. According to the authors, this level "generates memorial practices that relate to the way of death (e.g., murder and traffic) or how they were appreciated in life (e.g., celebrities)" (p. 5). Similarly, Walter (2015) suggests the concept of public mourning, pertaining to either high-status figures or to ordinary people who die in tragic circumstances. In this study, we examine the latter. We look at public memorial Pages created in memory of ordinary people that nonetheless generate public mourning.
Analyzing Facebook memorial Pages, Marwick and Ellison (2012) discuss the publicizing of the deceased in terms of impression management strategies and conflicts among users. They focus on context collapse, negotiation of visibility, and the four characteristics of social media: persistence, replicability, scalability, and searchability (see boyd, 2010). They conclude with a recommendation for future research that will employ qualitative methods to explore how creators of memorial Pages view the role of the Page, their motivations, and their view of different audiences, such as strangers (p. 398). Our current research does precisely that and seeks to provide answers to these questions. However, while Marwick and Ellison (2012; also Sabra, 2017) frame their investigation in terms of context collapse, we suggest viewing Facebook memorial Pages via the social capital approach. We focus on admins' (Page creators') practices, which result in the accumulation of social capital.
---
Social capital and social media
Defining the term social capital is challenging, in part because it has received multiple definitions during the last few decades. Kritsotakis and Gamarnikow (2004) observe that "defining social capital is rather problematic" (p. 43); Williams (2006) adds that it is a "contentious and slippery term" (p. 594), and Xu et al. (2021) conclude that it is an "encompassing yet elusive construct" (p. 362).
One of the influential formulations of social capital was proposed by Bourdieu (1986), as part of his conceptualization of different types of capital and related systems of exchange. For Bourdieu (1986), "the distribution of the different types and subtypes of capital at a given moment in time represents the immanent structure of the social world" (p. 242). In line with his practice-centered approach and his dialectic view of the structure-agency relations (like Giddens, 1984), Bourdieu puts much emphasis on the role of constant social interaction (micro) in maintaining social structures (macro). He accordingly sees social capital as "potential resources which are linked to possession of a durable network of more or less institutionalized relationships" (Bourdieu, 1986, p. 248).
Another oft-cited and productive definition emerges from Putnam's (2000) view of social networks from the perspective of political science and civic engagement. Putnam (2000) draws a distinction between two main types of social capital: "bridging" and "bonding." The first describes broader, more diverse, and inclusive relations, which are often more tentative, while the latter concerns more exclusive relations, which are less diverse and more cohesive. The two concepts echo Granovetter's (1973) famous observation concerning "weak ties" versus "strong ties" (which, wittingly or not, are goal-oriented).
In a related manner, Williams (2006) notes that strong ties supply a "getting by" type of network (e.g., family and close friends), while weak ties supply a "getting ahead" social environment (e.g., distant acquaintances, social movements). He further suggests that different types of social networks can predict different types of social capital. More recently, Xu et al. (2021) conclude that "social capital consists of both social networks and resources derived from social networks" (p. 363, emphasis in original). Hence, we now turn from describing network characteristics to describing measurements of their outcomes or resources.

Williams (2006) operationalizes measures of assessing social capital outcomes by addressing the two types of social capital Putnam (2000) discerned. As per bridging social capital, he developed a questionnaire based on several criteria, one of which is contact with a broad range of people; as per bonding social capital, he builds on several dimensions, including emotional support and the ability to mobilize solidarity. Xu et al. (2021) found that network features, specifically tie strength and communication diversity, result in different levels of emotional, practical, and informational support.
Theories of social capital have been studied extensively in relation to social media, so much so that they are recognized as a leading area of interest in the field (Stoycheff et al., 2017). One stream of scholarship has explored the effect of social media affordances on social capital outcomes. The term "socio-technical capital" (Resnick, 2002) captures these relations: individual users enjoy a greater ability to accrue social capital in the age of social media, as it becomes easier to maintain existing connections and create new ones.
Indeed, studies have found a positive association between the usage of SNSs and perceived access to social capital resources (Ellison & Vitak, 2015). Ellison and Vitak (2015) observe that recent studies further examined the "specific kinds of activities that are predictive of social capital" (p. 210, emphasis added) and not only general measures of use. They point to two main factors that appear to be most significant to social capital gain: the size and composition of the network and how users communicate with that network, that is, patterns of interaction. They stress that "social capital is derived from interactions with one's network" (p. 210, emphasis in original).
In this article, we view social capital as potential resources that are produced through interactions in a structured social network. These resources may possess bonding or bridging social capital qualities, including emotional, practical, and informational support (Bourdieu, 1986; Putnam, 2000; Xu et al., 2021). Rather than looking separately at resources or network characteristics, our focus is on social capital processes, that is the relations between the social network and the outcomes or resources that emerge from it. While some existing literature examines these relations and processes, we add a third element, namely the social network platform. This concerns how specific affordances enable and motivate social capital processes, and how users utilize affordances to position themselves and others in ways that encourage accumulation. Positioning is constitutive of social capital processes (Basu et al., 2017), and in line with our platform-centered approach, we take it to include users' practices of discursive positioning and the positioning that the platform itself performs. Within this framework we look at Pages' affordances (Page category, About, followers' count, etc.) as well as users' practices and activities, discourse, and patterns of interaction. Together, the findings provide fruitful insights into social capital processes, memorialization practices, and public remembrance on SNSs.
---
Method
---
Sampling
The research sample includes 18 Facebook Pages, which we observed over three years. All the Pages were created in memory of ordinary people who died in nonordinary circumstances. Typical examples include a woman who was murdered by her male partner, a high-school student who committed suicide as a result of cyberbullying, a female backpacker who died in a bus accident during a trip to Nepal, victims of terrorist attacks, and fallen soldiers (males and females). Table 1 presents key details of all the cases, including the cause of death. To stress, none of these commemorated individuals were public figures or known publicly. The cases include 12 men and 7 women (one case refers to the death of female and male spouses), ranging in age between 15 and 55, with an average of 25.6 (Table 2). The Pages were created between January 2011 and October 2016, and are all in Hebrew. All the translations are ours.
Data collection procedures employed Facebook's search bar (Marwick & Ellison, 2012; Navon & Noy, 2021). We looked for keywords and phrases related to death and memorialization while using Facebook's filter to specifically reach Pages (and not Groups or Profiles). Because the display of Facebook's search results is managed by unclear criteria (alphabetical order, date of creation, followers count, etc., see Kern et al., 2013; Kern & Gil-Egui, 2017; Navon & Noy, 2021), we conducted multiple searches, which led us to different lists of Pages. To further offset Facebook's unknown algorithmic preferences, we did not always sample the Pages from the top of the result list.
After collecting the data, we selected Pages for analysis based on the "intensity sampling" method. Intensity sampling focuses on the relevance of specific cases, their expected contribution to the research, and the extent to which they offer insights into our field of research (Suri, 2011). In order to strengthen the data's heterogeneity, we selected diverse cases in terms of age, gender, cause of death, socio-cultural background, etc. As indicated earlier, Pages are visible and available to anyone on Facebook, which made the work of accessing all the contents, posts, and comments on each Page relatively easy. Since we are particularly interested in the admins' roles and communicative practices, our analysis focuses on posts and not on comments. Still, the comments provided complementary material that enabled a better understanding of the larger picture, including the dynamics among users and between the users and the admins.
---
Analysis
Between June 2018 and March 2021, following the data collection phase, we conducted ethnographic fieldwork based on the principles of digital ethnography (Varis, 2016). Siding with Varis, we see ethnography not primarily as a data collection practice and not so much as a set of methods and techniques, but as an approach that "is methodologically flexible and adaptive: it does not confine itself to following specific procedures, but rather remains open to issues arising from the field" (Varis, 2016, p. 61). As such, we do not employ a pre-structured qualitative analysis procedure (such as content analysis), but address discursive concepts such as positioning and participatory dynamics (Giaxoglou, 2015; Harré, 2015; Navon & Noy, 2021). Following Romakkaniemi et al. (2021), we link the frames of positioning and social capital, taking positioning as both a theoretical and methodological framework (p. 5). We identified and analyzed positioning levels (for instance, positioning of the deceased, of the Page, of the admins, and of the followers) and positioning strategies, which we refer to as techno-discursive practices (from choosing the most beneficial Page category to describing the deceased and the death story in a collective/heroic manner). We were sensitive to relations between actors as established through positioning, keeping in mind that the "way individuals are positioned in social structure can be an asset in itself, and social capital is conceptualized as that asset" (Basu et al., 2017, p. 782). Examining positions and positioning, we highlighted different roles, different acts of participation, and levels of engagement and commitment, which together amount to participatory dynamics between the network members (Navon & Noy, 2021). We applied these discursive concepts more closely to several dozen posts from each Page, from which the examples below are taken.
In terms of research ethics, we now turn to address the processes of accessing, analyzing, and representing data from these memorial Pages. In a scoping review of 40 empirical papers, Myles et al. (2019) aimed to situate ethics in online mourning research. They suggest that "terrain accessibility constitutes a determining factor" (p. 293) in ethical decision-making, including data anonymization. They further refer to the difference between a Facebook Page and a Facebook Profile in terms of "the nature of the online setting" (p. 292). Yet, they emphasize that ethical decisions should not rely on technological arguments and affordances, but rather on an actual ethical reflection. In line with Markham (2015), they invite researchers to think contextually about ethics and conclude that ethical judgments could only be made in context. In the context of the current study, we believe that the activity on the memorial Pages in our sample possesses a distinct public quality. Nevertheless, to ensure anonymity, we changed the names of the deceased/the memorial Pages, and since we translated all the quotes from Hebrew, they could not be located via search.

Varis (2016) highlights the difference between early research of technologically mediated communication that centered around "things" or "texts" (collected randomly, detached from their social context) and later research that examined "actions" and situated practices. This shift builds on a new understanding of discourse as a socially contextualized activity. In this perspective, context and contextualization are critical issues that "should be investigated rather than assumed" (p. 57).
Varis suggests two contextual layers that digital ethnographies of communication need to investigate. The first is media affordances and the second is online-offline dynamics. We implemented these two layers as part of our analysis. As for the first layer, we pursued an ethnography of affordances, identifying and investigating different affordances that admins use when creating and maintaining a Page, and how features such as Likes, Shares, and Following shape discourse and dynamics on the Page. In line with Klastrup's (2015) and Kern and Gil-Egui's (2017) studies of Facebook memorial Pages, we also examined the About section of each Page and analyzed the textual data therein.
The About section serves as the Page's visiting card. It displays its basic information, in part provided by Facebook and in part by the admins. This includes the current number of people who like and follow the Page, its category, contact info, and a short introductory text. Up to May 2020, during our data mining stage, the About section also included the Page creation date and, in most cases, a "Team Members" title which shows the admins. In the updated version, this information was removed. A new section called "Page Transparency" was added "in an effort to increase accountability and transparency of Pages" (Facebook Help Center, n.d.). Yet, the information provided in this new section is actually more limited when compared to the previous version. A "Page History" title presents the creation date; however, a new title "People Who Manage This Page" does not reveal names and Profiles as it used to in the past, but rather only the primary country location of the admins. This update reinforces previous observations about the vagueness and anonymity of Pages' administrators (Grömping & Sinpeng, 2018; Kern et al., 2013; Kern & Gil-Egui, 2017; Poell et al., 2016), and puts into question Facebook's declared efforts to increase transparency. Specifically examining memorial Pages, Marwick and Ellison (2012) likewise observed the difficulty "to ascertain who created the page and their motivations for doing so" (p. 388). In the current study, we try to answer these questions based on the Team Members data we collected before the Facebook update, a textual analysis of the About texts, and the ethnography we conducted.
As for the second layer, online-offline dynamics indeed turned out to be an important part of our analysis. The memorial Pages in our sample are all created in memory of actual people and the life they lived offline or their offline death story, as opposed to studies of memorial Pages that included Pages dedicated to fictional characters, places, or things (Forman et al., 2012;Kern et al., 2013). Moreover, the activity on these memorial Pages involves production, promotion, and documentation of a rich variety of offline events, as will be discussed shortly in the findings section. Below we discuss the Pages' names, their categories, About texts, admins, and followers' count along with the analysis of admins-followers interaction and the activity on the Pages.
---
Findings and discussion
---
Page name
The first finding we discuss corresponds with the first step of creating a Page: supplying the Page name. Typically (72% of the cases), the Page name consists of two verbal elements. The first element concerns one of the following phrases: "In memory of . . ." or "Remembering . . .," which is formative because it designates the meaning of the Page as a memorial site. The second element is the deceased's full name, which appears in all the cases. Supplying the deceased's full name contributes to a more formal and respectful tone. It accords well with cases in which the dead have served in the military or in a police unit, where their rank appears next to their name as part of the Page title ("In memory of ACOP Eytan Bar," or "In memory of Cpl. Hodaya Cohen"). In this vein, several titles include an English translation or the ending "the official Page of . . .," serving to establish a sense of formality, authority, and recognition of the Page and of the deceased. Supplying the deceased's full name may also suggest that Page creators do not expect or assume that all visitors know the deceased personally or beforehand. One way or another, a norm seems to be emerging in regard to naming memorial Pages, which is based on the evocation of one of the two phrases together with the deceased's full name.
---
Page category: community, public figure, interest
Right below the Page name, the Page category appears in grey and in smaller letters. A Page category "describes what type of business, organization or topic the Page represents" (Facebook help text). When creating a Page, Facebook's affordances allow users to type freely in the Page category text box while receiving "help" from Facebook, which suggests existing categories according to the letters she types. These potential categories may seem like helpful suggestions, but in fact, the user must choose one of these pre-existing options. In other words, defining a category is a necessary step in creating a Page on Facebook, and it can be done only according to a pre-given list that the platform provides.
The list of available categories that Facebook offers (as of July 2021) comprises over 1,500 possibilities, which include, for example, 12 types of Tour Agencies, 15 subcategories of Pet Services, 28 different types of Chinese Restaurants, and a similar number of Beverage shops (from Sake Bar to Tiki Bar). However, none of the categories or subcategories offered in the detailed list relates to memorialization, nor to synonyms or related terms (commemoration, remembrance, death, dying, mourning, grief, etc.). This raises interesting questions: How carefully does Facebook select, form, and shape Pages' categories and uses? How do users act within this framework of affordances? And more practically, which categories do users employ in memorial Pages when such elementary categories are missing altogether?
The categories admins employed in our sample are Community, Public Figure, and Interest; the Interest category includes Sports, Visual Arts, and the like. In the context of memorial Pages, however, the meaning of these categories is negotiated. They do not reflect the meaning Facebook provides, but instead the interpretations that users/admins ascribe in line with their goals: to engage interest in the Page, to create a large community that recognizes and remembers the deceased, and to turn her/him into a public figure. This finding demonstrates an interactional view of affordances as a relationship and negotiation between the interface and the user, rather than merely a property, or a feature, of the interface itself-an "entanglement of policy and practice," in the words of Arnold et al. (2018, p. 52). Users take the freedom to interpret Facebook categories creatively in order to contend with restrictions put forth by the platform. They might not have absolute freedom to choose the Page category, but they enjoy the freedom, which they exercise, to choose how to interpret and use it.
This finding also reveals an emerging norm concerning the socially accepted way of naming and categorizing memorial Pages. This norm hints at admins' underlying motivations for creating and maintaining a memorial Page (more on this below).
---
Admins' concealed identity
Four cases in our sample (22%) appeared with users (linked Profiles) as Team Members. In three of these four cases, the surname of the admin user was identical to that of the deceased, yet the specific kinship was unspecified. This relation is not revealed in the About text either, but a review of the posts across the Pages tells us that two are mothers of the deceased, one is a sister, and one is a cousin (hence the different surname). In two other cases (11%), the introductory text in the About section refers to the Page admins, though in an unspecified way: "The Page is moderated by the loving family," and "The Page is moderated by friends from Na'ariah [city] and by Nurit's brothers." Finally, in three additional cases (16.6%), we were able to deduce who runs the Page by looking at the posts over time, as the information was not provided in the About section (neither as Team Members nor in the introductory text). In one case, the admin is the deceased's daughter who often signs posts as "daddy's girl"; in a second case, it is the sister who frequently mentions her name, uploads photos of herself, and shares posts from her personal Profile on the Page. In the third case, family pictures appear frequently, and most of the posts end with the designation "the family," but no further information is provided about the specific admins. Overall, in all the nine cases we detailed above (50% of our sample), the admins are self-identified as relatives of the deceased, and in the rest of the cases (50%), it is unclear who created or manages the Page. In other words, in most cases it is difficult to determine who the admins are (again, cf. Marwick and Ellison, 2012).
---
Admins' motivations and collective discursive positioning of the deceased
In more than half of the cases (61%), admins describe in the About section the reason(s) for which they have created the Page. The accounts they supply share similar motifs: "We opened this Page to keep the spirit of . . . alive," "This Page was created for the memory of . . .," and "This Page is in his memory and to inspire his legacy." In other words, the goal of the Page, as stated by the admins, is to have the deceased remembered publicly. More precisely, it is to make the deceased remembered and recognized by as many people as possible, beyond the circles of relatives and acquaintances who knew her when she was alive. In most of the cases (83%), admins use the About text as a space to write about the deceased and provide basic information that should presumably be known to acquaintances. For example, age at death, date and cause of death, a list of family members who are left behind, or a short biography. In line with the formal register, these brief biographies are often written in an informative and factual manner. Consider the following example:

Such texts supply a brief overview of the deceased's life story, highlighting his exemplary military service and heroic death. The death story is charged with a deeper meaning relating to honor, sacrifice, patriotism, and recognition, that aims at transforming it from a personal death story to one that is collective and anchored in the public sphere. Hence the detailing of the (large) number of people who attended the funeral. Stressing the deceased's contribution to the state or society adds both moral and collective values to the act of remembrance, to those publicly and collectively engaging it, and thereby also to the memorial Page itself. This finding echoes Harju's (2015) observation of "a stance of moral superiority" (p. 130) that users construct in relation to public mourning of a celebrity on YouTube (Steve Jobs).
The question at its heart is a moral and sociocultural one, namely: who is worthy of public remembrance?
According to Harré (2015), moral questions are integral to discursive positioning. Positioning theory claims that every thought, expression, and social action in and among groups "take place within shared systems of belief about moral standards," and about the distribution of roles, rights, and duties (p. 266). Similarly, Giaxoglou (2015) describes affective positioning as "semiotic and discursive practices whereby selves are located as participants . . . producing one another in terms of roles" (p. 56). Our findings point at heroic and sacrificial discursive positioning in all the cases in which the deceased served in the military or the police. For example: "Her death saved many lives," and ". . . Taking the shot in his own body, Yoni prevented multiple deaths." These and similar texts portray the ultimate sacrifice paid by the deceased, evoking a sense of patriotic gratitude (Noy, 2015).
In one case, in a Page dedicated to the memory of Shlomo Levi, the About text opens with this brief introduction: "Gal Levi-a son, brother, friend, warrior." Here the discursive positioning reflects a scale that ranges from the personal, through the familial (first "son," then "brother"), to the social and the institutional. In another case, even though the deceased's cause of death was suicide and he did not fall in the line of duty, his rank plays a salient role. The About text says: "This Page intends to commemorate the legacy of the officer, the policeman, and the beloved person, Major General Eytan Bar." The admin of the Page is self-identified as the deceased's daughter, who regularly signs her posts with the words "daddy's girl," yet the focus lies with his public role and contribution. The goal is to form his memory as a respectable individual who has served the country and the society well.
Collective and often heroic discursive positioning also appears when the deceased was not a soldier nor held a formal institutional role. In these cases as well, admins highlight the social importance of the deceased or the death story and the collective values it embodies. Such is the case in the Page in memory of Talya Nadav. Talya died in a car accident abroad involving negligent DUI by two Israelis who avoided prosecution. The admin stresses the relevance of the tragic death story to the general public.
We need your support: after a year and ten months they [the perpetrators] are still walking free . . . They deserted Talya who died and fled Mexico. We begin a struggle to bring them to justice. Enter the link and donate for "Justice for Talya Nadav." For us. For everyone. Because we all travel abroad. Us, our children, our friends. We might all find ourselves in a similar situation. [The Page Remembering Talya Nadav with a smile, March 9, 2017]

This example demonstrates how the admin discursively positions the deceased and the death story as a matter of collective interest. She builds on the shared value of justice to mobilize social engagement and support in the form of crowdfunding. The deceased becomes a symbol, yet this is achieved not by appealing to themes associated with national sacrifice and gratitude, as in the case of the soldiers, but by appealing to a sense of social responsibility. These dynamics resonate with Walter's (2015) observation, that in "contemporary culture's celebration of vulnerability . . . victims are now as or more likely to be commemorated as heroes" (p. 13).
Furthermore, even when death is not framed in terms of victimhood, admins still position the deceased as a valuable collective symbol. They do so by portraying her special virtues and unique character. In the case of Osnat Shemesh, a backpacker who died in a weather-related bus crash in Nepal, the admins state: "We've chosen to take 'the life according to Osnat' and turn it into legacy, into a will." Here, too, the deceased is elevated, as her life is presented, in hindsight, as embodying shared values with which the audience can identify. Themes concerning national sacrifice and collective gratitude are altogether absent, yet the deceased is framed as a collective symbol. Admins supply quotes by the deceased, which they frame as a motto or a legacy (a practice also employed in the mourning of celebrities; see Harju, 2015, p. 137), and share stories about her life and highlight her virtues. These acts of discursive positioning serve to supply an account of why that specific person is worth remembering.
---
The page followers' count: admins' efforts to increase the network
If someone is worth remembering, her/his memorial Page should be worth following. The followers count shows how many users are following a Page. In quantitative terms, this index measures circulation and exposure, i.e., the size of the network (Ellison & Vitak, 2015). In qualitative terms, the followers' index helps assess the Page's popularity and social impact, and the social capital that Page admins have come to possess.
The average number of followers on the Pages we sampled is 13K, ranging from 1.2K to 40.3K. Explicit efforts to gain followers, Likes, and Shares appear in all 18 cases, pursued through repeated and explicit requests by the admins ("Please share the Page with your friends. Thank you"). In one case, the admins offer new followers a small token in the form of bracelets:
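The followers-count figures reported above are simple descriptive aggregates over the sampled Pages. As a minimal sketch of how such an index is summarized (the per-Page counts below are hypothetical, since individual figures are not reported in the text):

```python
from statistics import mean

# Hypothetical follower counts for a sample of memorial Pages
# (illustrative only; the article does not list per-Page figures).
followers = [1_200, 3_500, 8_000, 13_400, 22_000, 33_000, 40_300]

# The followers count is a quantitative index of network size:
# its mean and range summarize circulation and exposure across the sample.
print(f"average: {mean(followers):,.0f} followers")
print(f"range:   {min(followers):,} to {max(followers):,}")
```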
Enter the "Osnat's Butterflies" Facebook Page, Like the Page, and you can get free bracelets . . . [We] warmly ask that you "Like" the Page "Osnat's Butterflies," and request that the bracelets will be sent to you. [The Page Osnat Shemesh - The sun will never set, July 11, 2015]

The butterfly bracelets are part of a social initiative propelled by this Page's admins to promote good deeds and giving, in the spirit of "Pay it Forward." The initiative was established in memory of Osnat Shemesh, the backpacker who died in Nepal, who had a butterfly tattoo on her shoulder. The Osnat's Butterflies Page has over 33K followers, and the social initiative it promotes has reached over 40 countries worldwide. The goal of the Page is to increase public awareness and support for the project by documenting and posting butterfly paintings and bracelets around the world, alongside moving stories about Osnat Shemesh (the deceased). In this way, the Page accomplishes the goals of memorial Pages, as implied in the categories we discussed above: building a community, engaging interest, and turning the deceased into a public figure.
The idea of publicizing the deceased is exemplified quite clearly also in the following post, taken from a Page in the memory of Police Assistant Commissioner Eytan Bar. As indicated earlier, Bar ended his own life in the wake of an investigation he was under. The admin of the Page is his daughter ("daddy's girl"):
You're all invited to Like and Share the Page in memory of our father, so no one will forget this angel who wholeheartedly gave his life to the country. [The Page In Memory of ACOP Eytan Bar, July 26, 2015]

This short message performs the transition from a personal loss ("our father") to one that is collective and public ("gave his life to the country"). Such texts hint at the perceived connection that users make between online participatory acts, such as Follow, Like, and Share, and participatory acts of a cognitive or emotive nature, like memory, recognition, and esteem. The admin uses the conjunction "so" ("so no one will forget") to form a causal connection between Like/Share and a public memory, between visible online engagement and its cognitive or emotive implications.
---
Offline page activity: initiatives and events
The Page Likes that we examined offer only a "glimpse" (Bernstein et al., 2013, p. 21) of how admins can evaluate the activity of their audience. In the case of Facebook memorial Pages, the activity extends beyond the online sphere and involves the production, promotion, and documentation (uploading and posting pictures) of offline initiatives and events. The initiatives vary, reflecting the sociocultural differences found in our sample, which, in turn, reflect pre-existing memorial practices in Israeli society. Some cases have a more spiritual or religious orientation - the inauguration of a Torah scroll and other Jewish rituals; in other cases, there are sporting events - races, soccer tournaments, mass Zumba workouts; and yet others take the shape of intellectual or educational activities - talks at schools, the Mind Sports Olympiad, etc. In most cases, the admins promote multiple events and initiatives throughout the year, which keep the Page constantly active, rather than active only around a single, annual memorial event. The frequent activity on the Page serves to maintain an ongoing interaction with the network and to establish the Page as an appealing and vibrant site.
Despite the differences in content, we found several similarities in the ways admins communicate and promote these initiatives. First, the format: in 89% of the cases, event announcements (information about an event) take the shape of a photo - a professionally designed flyer - rather than a textual post. The flyers are visually stylized and convey the impression of a formal invitation. The second similarity concerns addressivity, or whom the posts address. These invitation posts are directed at the general public, calling for as many people as possible to join the community-turned-network and partake in its activities. Third, most of these posts use similar keywords, evoking themes of respect, recognition, and togetherness. Consider these examples:

We invite you all to come, watch and participate in the heritage of our father. It is important for us that a large crowd will show up, so that it will be respectable. The tournament will take place in Modi'in, and silicone bracelets will be sold for 10 shekels [3 USD] with the inscription "Love thy friend as thyself - in the spirit of Eytan Bar's path." We will donate the money to the same places that our father used to support. We are looking forward to seeing you. Spread the word! [The Page In Memory of ACOP Eytan Bar, June 12, 2017]

Both examples include the words "honor" and "respect" (which in Hebrew are the same word, kavod). The notion of respect is significant, and the crowd plays an important role in its amassment. Indeed, the presence of a large crowd is the very mechanism through which respect and honor are generated and publicly assessed. Hence the address is directed at "the general public" and "you all." Moreover, the second example ends with the directive "Spread the word!," explicitly seeking to reach beyond the Page followers.
The admins make use of the network (followers) alongside the platform's affordances to reach as many people as possible, to generate large attendance, and to amass respect.
As part of the production of multiple memorial events and initiatives, admins often address the followers with various requests for resources and participatory actions. These actions range from physical attendance, through volunteering and contributing one's skills and knowledge (video editing or teaching Zumba), to purchasing memorial merchandise and donating money. Requests for resources thus include economic capital (Bourdieu, 1986), but also practical support, which is understood as a form of social capital (Xu et al., 2021). The extensive memorial activity, and the involvement or harnessing of wide crowds (representing "the public") in its production, fulfill the three goals of memorial Pages we detailed earlier: Community, Interest, and Public Figure.
These three common categories of memorial Pages can be seen as reflecting three interrelated aspects, or stages, of a single process in which users create and maintain memorial Pages that come to serve as mechanisms for the accumulation of social capital resources. The process rests on the assumption that a collective interest is at stake, which results in attempts at creating a community and establishing the deceased as a public figure. In fact, turning the deceased into a public figure builds on the size of the community and the degree of its members' engagement and interest.
The ongoing activity on memorial Pages, including the positioning of the deceased and the various online and offline events, are all put in the service of the same three goals or stages in this process: building a community (with high levels of involvement and commitment), engaging interest, and ultimately positioning the deceased as a matter of public interest, in other words, as a public figure worth remembering (Figure 1).
---
Memorial merchandise: economic and social support
The second example above draws attention to another widespread practice we observe in our data, namely selling merchandise in memory of the deceased (in this case silicone bracelets). This practice appears in 61% of the cases in our sample, and in some cases several types of merchandise are sold through the Page. The examples vary: T-shirts, baseball hats, bumper stickers, memorial candles, and recently also face masks (due to the COVID-19 pandemic). All the products are imprinted with the full name of the deceased alongside an image, a slogan, or a quote that is made to be associated with her/him. Wearing this memorial merchandise means carrying the deceased's memory offline in an embodied fashion, continuing and honoring her/his legacy. As Harju (2015) puts it, apropos her discussion of celebrity commemoration on YouTube, "materiality anchors meanings" (p. 136).
This finding is significant because branded merchandise is generally associated with celebrities, not with ordinary people. Designing and selling merchandise in memory of an individual therefore conveys the message that he/she was a famous person, or should become famous posthumously. Furthermore, since nearly all of the merchandise sold is wearable, it promotes offline public display of the deceased and enhances her/his status as a public figure. This closely relates to our earlier finding about the frequent employment of the Public Figure category and confirms our argument about the underlying motives and goals of admins of memorial Pages.
The admins use the memorial Page to promote, distribute, and sell this merchandise and, in line with the moral discursive positioning on the Page, they add a moral value to the products and to the act of buying and using them. Consider these two examples: "Wearing the bracelet commits the wearer to maintain the values you [the deceased] represented, and in this way to become a better person," and "10 shekels for your contribution and involvement in the Observatory project in memory of Ofek. So friends, share and get yourself a new bracelet for a worthy cause." 2 The purchase of memorial merchandise emerges as a value-laden moral action because (1) the deceased is consistently portrayed as a special person whose story carries social meaning and significance and bears moral value; (2) the money collected is directed to worthy and charitable causes, such as donations; and (3) participatory actions such as buying memorial merchandise support the (often bereaved) admins. The support that admins receive is twofold: financial (economic capital; Bourdieu, 1986), but also social and emotional (social capital; Williams, 2006). Purchasing memorial merchandise reflects both interest and involvement, and enhances the sense of recognition of the social significance of the deceased. The admins are well aware of these meanings and, in response, express their gratitude readily and frequently, as we show below.
---
Expressing gratitude: from followers to partners
Alongside the repeated requests and invitations that Page admins direct at the Page followers, they also make sure to thank them devotedly. In doing so, the admins' tone is rather informal, personal, friendly, and enthused. They routinely express gratitude and show their appreciation for the followers and their engagement. Every action counts. From Like and Share, through money donations, to physically attending events - admins show that no activity goes unnoticed. They highlight the importance of these actions as not merely helpful, but truly vital for the memorial Page and its moral goal. Followers thus become an integral part of the Page and its activity; in other words, they are repositioned as partners. Indeed, sometimes admins note this explicitly: "We are grateful for having such partners as yourselves."
At stake here is a significant "status promotion" for the followers, pursued vis-à-vis Facebook's affordances and hierarchical terminology: admins, who manage, approve, curate, edit, and produce content, and followers, who consume it. By symbolically "upgrading" the followers to the status of partners, admins imbue them with a sense of importance and enhance their commitment to and engagement with the Page. In this way, they encourage the followers to contribute more: more Likes, content, resources, and engagement.
The following example nicely captures this circle of encouragement-engagement, here in relation to the inauguration of a Torah scroll.

Wow, how exciting. . .. Thanks to you we achieved the goal!!! . . . with every passing day, the hug we received grew greater and greater. Thanks to you. . . to your shares. . . to your devotion . . . more than 100,000 shekels were raised in the last couple of days for the commemoration of Ofek!! [The Page In the Memory of Ofek Noy H.Y.D, September 8, 2016]

Communication here seems spontaneous and informal, and while the accomplishment is framed as mutually achieved ("we achieved the goal!!!"), gratitude is clearly expressed and extended to the followers ("Thanks to you" and "to your shares"). Followers' engagement is described as a "hug," a nonverbal act that indexes affection, support, and closeness. Thus, admins discursively position the followers, who are otherwise strangers, as helpful in extending love and support. This finding resonates with Stage and Hougaard's (2018) discussion of "caring crowds" (p. 79), in which love and care are expressed not only through words but also through "material practices" (p. 94). In the cases they observe - two public Facebook Groups that were created for two children diagnosed with cancer - crowdfunding was a dominant practice that was motivated and energized by sharing the personal stories of illness and suffering, alongside gratitude expressions by the parents who run these Groups.
In the following example, the mother of Osnat Shemesh (mentioned earlier) nicely illustrates how to direct attention to the followers.
When you experience the most excruciating pain possible, you hold on to any bit of light, like a wounded animal. It seems like this is the only way to survive. In the last couple of months, family and friends have completely embraced us, and I will forever owe them my life and my sanity. I want now to talk about the people we don't know; about bits of light that radiate from people who never knew us or Osnat. These people, who send us comforting messages, strangers who took time off their everyday routine . . . to all these beautiful souls . . . we wish to say thank you. Thanks for seeing us. Thanks for taking time off for us. Thanks for helping us regain our faith in goodness. [The Page Osnat Shemesh - The sun will never set, December 25, 2014]

Emphasizing the pain this admin is experiencing ("most excruciating pain possible") enhances the moral value of the followers' benevolent participatory actions. The admin, a bereaved mother, mentions and thanks family and friends, then directs special attention to other people. She pursues this through the meta-discursive statement "I want now to talk about. . .," with which she signals a thematic shift to what will be the focus of her message, namely those who deserve the utmost gratitude. These are "strangers" - users with whom she is not familiar, who showed interest and "took time off," and who served as an audience ("Thanks for seeing us") and a network. The admin describes the visitors and followers of the Page as radiating "bits of light" and as "beautiful souls" who help restore faith in goodness.
Posts of this type (re)confirm the moral value that engaging with the Page carries, framing it as a socially valued action. This follows from the social solidarity and support that followers direct at the admins, often bereaved users in pain, and from the fact that memorialization is generally held to be a socio-moral project (Noy, 2015, p. 39) - all the more so when the deceased is consistently portrayed as a hero, a special person, a respectable public figure worth remembering.
Such expressions point to how admins acknowledge having received emotional support from their network of followers. Recall that Putnam (2000) and Williams (2006) associated emotional support and the mobilization of solidarity with bonding social capital, that is, with interactions typical of strong ties and closer relationships. Interestingly, our findings suggest that such resources may also be obtained through what we can call "bridging relations" - interactions with a broad network of mostly strangers. Admins explicitly and repeatedly link emotional support to such parameters as engagement with the Page, the economic capital gained through the network, and the practical support followers provide.
---
Conclusion
In this article, we explored Facebook Pages created in memory of ordinary people with the aim of raising social awareness and public remembrance of their death. We offered a new perspective on these memorial Pages and suggested viewing them through the scope of the social capital approach. In line with existing literature (Ellison & Vitak, 2015), our findings demonstrate that the most significant factors of social capital processes are the size and composition of one's network and the patterns of interaction.
We identified different communicative practices that admins pursue with the aim of reaching an audience, increasing the size of their network (i.e., the followers count), and enhancing its activity and engagement. In addition, we analyzed how admins interact with their network - a multi-layered communication that serves the multiple functions they seek to accomplish. On the one hand, admins use a formal register, in which the notion of respect is salient, as they try to establish a sense of formality, authority, and recognition towards the deceased and the Page. On the other hand, they use a highly personal, enthused, and emotional register, partly because of the engaging effect of affective performances, and partly because of the affect-laden quality of digital mourning practices (Giaxoglou & Döveling, 2018). When a user engages in heightened emotional sharing, it activates reactions from the networked audience in the form of an exchange of emotional and support resources (Baym, 2010, in Giaxoglou et al., 2017), which have been shown to reinforce tie strength (Xu et al., 2021). In a discussion on networked emotions and sharing loss online (Special Issue of Journal of Broadcasting & Electronic Media, 2017), Giaxoglou et al. (2017) observe the "increasing mobilization of emotion as a commodity" (p. 7), and Sabra (2017) further notes that the potential for economic and emotional capitalization is integrated into the Facebook platform (p. 31).
Our findings flesh out these observations by showing how admins carefully and strategically select where and when to use a formal and factual register (e.g., the About section, biographical and informative posts), and when to use a more personal and friendly emotional tone (e.g., posts extending gratitude and appreciation that yield an encouragement-engagement circle).
The expected engagement, and with it requests for resources that admins post, range from online participatory acts to purchasing memorial merchandise, donating money, physically attending events, contributing one's skills to the production of initiatives, and so on. The accumulation of resources through the Page is a social capital process par excellence, a process in which ordinary users become admins and create their own network, gradually expand it, and harness it by employing platform affordances to achieve their goals.
Network members are mostly strangers. While previous studies note that strangers are unwelcome in Facebook memorial spaces (Rossetto et al., 2015; Walter, 2015), our study suggests that strangers are more than welcome and are deeply appreciated. In an effort to portray the deceased as a public figure and to establish a state of public remembrance, admins address the largest audience possible. Pages, as opposed to other Facebook sub-platforms, afford this publicity and capitalization, which users acknowledge and take advantage of from the very early stages of creating and naming the Page. This complements studies that have examined social capital processes on Facebook and the relationship maintenance behaviors of existing connections (i.e., Facebook friends). Here, we examined the creation, maintenance, and strengthening of new connections with strangers (i.e., Facebook followers), or parasocial relations, corroborating previous observations of memorial Pages, which "gather strangers rather than friends" (Klastrup, 2015, p. 147).
However, despite existing literature linking strangers and "weak ties" to bridging social capital outcomes (Putnam, 2000; Williams, 2006), in the case of the memorial Pages we studied, broad networks of followers consisting mostly of strangers in fact facilitate bonding social capital outcomes, such as solidarity and emotional support. Admins recognize this support and pursue a circle of encouragement-engagement that motivates participatory activities.
---
Limitations and future directions
This study has several limitations that future studies can address. First, due to the relatively small sample size, we could not draw conclusions relating to the connections or correlations between the cause of death and the activity or dynamics on the Page. Second, future studies may examine the collectivization of personal mourning and related social capital processes on other platforms with different affordances and dynamics (such as visual versus textual platforms). We believe that much of the transferability of these insights rests on platforms' public quality or publicity. Finally, while we focused on memorial Pages, future research can explore social capital processes on Pages in different contexts and themes. Future research may also focus on social capital in relation to "special users," such as admins (rather than ordinary users), who employ special affordances and pursue special practices.
---
Data availability
The data underlying this study will be shared on reasonable request to the corresponding author. |
---

Experts in preventive medicine and public health have long since recognized that health is more than the absence of disease, and that each person in the 'waiting room' and beyond manifests the social, political, and economic ecosystems that are part of their total lived experience. The term planetary health - denoting the interconnections between the health of person and place at all scales - emerged from the environmental and preventive health movements of the 1970s-1980s. Roused by the 2015 Lancet Commission on Planetary Health report, the term has more recently penetrated mainstream academic and medical discourse. Here, we discuss the relevance of planetary health in the era of personalized medicine, gross environmental concerns, and a crisis of non-communicable diseases. We frame our discourse around high-level wellness - a concept of vitality defined by Halbert L. Dunn; high-level wellness was defined as an integrated method of functioning which is oriented toward maximizing the potential of individuals within the total lived environment. Dunn maintained that high-level wellness is also applicable to organizations, communities, nations, and humankind as a whole - stating further that global high-level wellness is a product of the vitality and sustainability of the Earth's natural systems. He called for a universal philosophy of living. Researchers and healthcare providers who focus on lifestyle and environmental aspects of health - and understand barriers such as authoritarianism and social dominance orientation - are fundamental to maintaining trans-generational vitality at scales of person, place, and planet.

Introduction
HEALTH: i. The state of an animal or living body, in which the parts are sound, well organized and disposed, and in which they all perform freely their natural functions; in this state the animal feels no pain; this word is also applied to plants. ii. Sound state of the mind; natural vigor of faculties. iii. Sound state of the mind in a moral sense; goodness.
Health as defined in Scientific Dictionary, 1863 [1]

Viewed through the prism of life (Greek: bios) and ways of living (Greek: biosis), health is an expansive term which has long since defied concrete definition. In 1946, the World Health Organization's constitutional statement [2] maintained that health is 'complete physical, mental and social well-being and not merely the absence of disease or infirmity'.

Figure 1. High-level wellness is applicable to organizations, communities, nations, and humankind as a whole. In an era of gross environmental concerns and a crisis of non-communicable diseases, personalized medicine must be increasingly viewed in the context of planetary health [image by author, S.L.P.].
Remarkably - even without our current, sophisticated understanding of biodiversity losses, environmental degradation, climate change, and resource depletion - Dunn underscored that high-level wellness is predicated upon the health of the Earth's natural systems [5]. In other words, discussions of high-level wellness - whether for person or civilization - must always consider the environment, and this must include broad aspects of the natural environment on which humans depend. Dunn was underscoring the principles of what is now termed 'planetary health'.
The term planetary health, popularized in the 1980s-1990s, underscores that human health is intricately connected to the vitality of natural systems within the Earth's biosphere. Coincident with the rise of environmentalism, preventive medicine, and the self-care movements of the 1970s, the artificially drawn lines between personal, public, and planetary health began to diminish [6,7]. Dunn's concept of high-level wellness was referenced in articles which discussed "a different philosophical framework through which individual, community, environmental and planetary health can be better understood in a broad and integrated fashion" [8] (see Figure 2).
As the global health burden has shifted from infectious diseases to non-communicable diseases (NCDs), greater emphasis has been placed on the health-mediating role of social determinants, lifestyle, and the total lived environment. The health implications of anthropogenic threats to life within the biosphere cannot be uncoupled from discussions of individual, community, and global health. Recent endeavors such as the Lancet Commission on Planetary Health [9] and The Canmore Declaration [10] have re-emphasized that public health, biopsychosocial medicine, and planetary health are one and the same.
---
Roadmap to the Current Review
Here in our narrative review, we will revisit Dunn's high-level wellness and explore its place in the emerging planetary health paradigm. First, we discuss some of the origins of the high-level wellness concept and describe how it manifests in contemporary clinical care. Next, we examine the concept of planetary health, its historical origins, and the global movement which now considers the health of civilization and the Earth's natural systems as inseparable. With this background in place, we argue that the concept of high-level wellness provides an essential framework for health promotion and clinical care in the modern landscape; it allows scientists of diverse fields-no matter how reductionist the scope of their inquiry-to see the large-scale relevancy of their work; it provides healthcare providers a broader vision of human potential with individuals as living embodiments of accumulated experiences shaped by natural and anthropogenic (i.e. social, political, commercial etc.) ecosystems-rather than a vision limited to a neutral disease-free set point.
Dunn's high-level wellness and planetary health (which we argue are synonymous) requires discourse concerning values, our connectedness to one another, our sense of purpose/meaning, and our emotional connections to the natural world. High-level wellness also demands discussion of authoritarianism, social dominance orientation, narcissism, and other barriers to vitality of individuals, communities and the planet. Finally, we emphasize that experts in environmental health promotion and lifestyle medicine are ideally positioned to educate and advocate on behalf of patients and communities (current and future generations), helping to promote vitality and safeguard the health of person, place, and planet.
---
High-Level Wellness
"Wellness is conceptualized as dynamic-a condition of change in which the individual moves forward, climbing toward a higher potential of functioning. High-level wellness for the individual is defined as an integrated method of functioning which is oriented toward maximizing the potential of which the individual is capable, within the environment where (they) are functioning. This definition does not imply that there is an As the global health burdens have shifted from infectious to NCDs, greater emphasis has been placed on the health-mediating role of social determinants, lifestyle, and the total lived environment. The health implications of anthropogenic threats to life within the biosphere cannot be uncoupled from discussions of the individual, community, and global health. Recent endeavors such as the Lancet Commission on Planetary Health [9] and The Canmore Declaration [10] have re-emphasized that public health, biopsychosocial medicine, and planetary health are one-and-the-same.
---
Roadmap to the Current Review
Here in our narrative review, we will revisit Dunn's high-level wellness and explore its place in the emerging planetary health paradigm. First, we discuss some of the origins of the high-level wellness concept and describe how it manifests in contemporary clinical care. Next, we examine the concept of planetary health, its historical origins, and the global movement which now considers the health of civilization and the Earth's natural systems as inseparable. With this background in place, we argue that the concept of high-level wellness provides an essential framework for health promotion and clinical care in the modern landscape; it allows scientists of diverse fields-no matter how reductionist the scope of their inquiry-to see the large-scale relevancy of their work; it provides healthcare providers a broader vision of human potential with individuals as living embodiments of accumulated experiences shaped by natural and anthropogenic (i.e. social, political, commercial etc.) ecosystems-rather than a vision limited to a neutral disease-free set point.
Dunn's high-level wellness and planetary health (which we argue are synonymous) require discourse concerning values, our connectedness to one another, our sense of purpose/meaning, and our emotional connections to the natural world. High-level wellness also demands discussion of authoritarianism, social dominance orientation, narcissism, and other barriers to the vitality of individuals, communities, and the planet. Finally, we emphasize that experts in environmental health promotion and lifestyle medicine are ideally positioned to educate and advocate on behalf of patients and communities (current and future generations), helping to promote vitality and safeguard the health of person, place, and planet.
---
High-Level Wellness
"Wellness is conceptualized as dynamic-a condition of change in which the individual moves forward, climbing toward a higher potential of functioning. High-level wellness for the individual is defined as an integrated method of functioning which is oriented toward maximizing the potential of which the individual is capable, within the environment where (they) are functioning. This definition does not imply that there is an optimum level of wellness, but rather that wellness is a direction in progress toward an ever-higher potential of functioning . . . high-level wellness, therefore, involves (1) direction in progress forward and upward towards a higher potential of functioning, (2) an open-ended and ever-expanding tomorrow with its challenge to live at a fuller potential, and (3) the integration of the whole being of the total individual-(their) body, mind, and spirit in the functioning process . . . high-level wellness is also applicable to organization, to the nation, and to (humankind) as a whole".
Halbert L. Dunn, MD, PhD. Canadian Journal of Public Health, 1959 [11]

In two notable papers-both published in 1959 [3,11]-biostatistician and public health physician Halbert L. Dunn conceptualized the idea of 'high-level wellness' (Box 1) for humankind and civilization at-large, maintaining that "wellness is not just a single amorphous condition . . . but is rather a fascinating and ever-changing panorama of life itself, inviting exploration of its every dimension" [3]. In this context, he included population pressures, rising rates of mental and functional illnesses, and the rapid speed of technological growth (especially in communications). Moreover, he stated: "it is probably a fallacy for us to assume, as so many of us have done, that an expansion in scientific knowledge can indefinitely counterbalance the rapidly dwindling natural resources of the globe" [3]. In other words, Dunn was acutely aware, even in 1959, that the ability to obtain high-level wellness-at individual and civilization-wide scales-was predicated on the health of the planet.

"High-level wellness is applicable not only to the individual but also to all types of social organizations-to the family, to the community, to groups of individuals, such as business, political or religious institutions, to the nation and to (humankind) as a whole. For each of these aggregates, it implies a forward direction in progress, an open-ended expanding future, interaction of the social aggregate and an integrated method of functioning which recognizes the interdependence of (humans) with other life forms".
Halbert L. Dunn, MD, PhD. 1966 [12]

Dunn's context for high-level wellness was beyond even national boundaries; in the era of rapid change, no longer could health be viewed as exclusively a local phenomenon: "The effects of these (environmental/social) changes ripple outward to all parts of the physical environment, affecting the entire ecology on which man is dependent, and also penetrating into the deepest recesses of his inner world" [13]. The search for high-level wellness in life (Greek, bios) cannot be separated from our individual and collective mode of living (Greek, biosis) or lifestyle; to understand such connections, Dunn advocated for educational efforts to "develop interest in biology on a vast scale, so that it would become of major interest to all. This would mean acquiring a deep interest in life-in the life process itself" [14]. Related to this, Dunn emphasized a need to understand how human attitudes to other forms of life (and the natural environment in general) are formed.
The prerequisite to individual and societal high-level wellness, Dunn contended, is the maintenance of a sense of purpose and opportunities for creative expression. On the other hand, he argued that the barriers to high-level wellness include authoritarianism, clinging to dogma, and lack of critical analysis skills. He encouraged health and medical bodies to self-reflect. Barriers to high-level wellness, Dunn argued, are manifest in uncritical allegiance to "teams" in political, economic, occupational, academic, and other professional and social spheres; in particular, the inability to adjust beliefs and communication based on advancing knowledge is a major impediment.
Dunn maintained that global wellness in the modern era is predicated upon providing opportunities (especially early in life) to see common ground, teaching children critical appraisal skills, and learning the value of listening to opposing views while 'searching for points of mutual agreement'. Dunn proposed a 'universal philosophy of living' which focused not on what individuals were 'against', but rather what they would be 'for': "a philosophy which will permeate the minds and hearts . . . a philosophy which men and women of good will, regardless of race, creed and nationality, can be for. A unifying type of philosophy which can be embraced and lived by all, within their own cultural background" [15].
He also called for greater research investments to be directed toward an understanding of the social, biochemical, physiological, and psychological pathways to the goal of high-level wellness; Dunn maintained that high-level wellness was itself a way of life-a lifestyle which involved a sense of purpose and meaning-one which maximized the odds of achieving the fullest potential. In its simplest form, high-level wellness equates to vitality; humans can experience the upper ranges of wellness when there is a feeling of 'zest in life', abundant energy, a tingle of vitality, and a feeling of 'being alive clear to the tips of your fingers'. However, Dunn cautioned that in 20th century modernity, zest was being confused with 'something that gives us a very momentary "lift"' [14]. In the 21st century, the iron pyrite of zest and aliveness is all-too-often sold to the public in the form of "energy" drinks [16].
Vitality has since become a measurable psychological construct and the subject of intense research scrutiny. Several vitality scales have been validated (as well as vitality subscales within larger assessments such as the Profile of Mood States and the SF-36), and researchers have linked vitality to various health-related outcomes; for example, vitality is emerging as a surrogate marker of reduced risk of NCDs, psychological wellbeing, and better life-course health [17][18][19][20][21]. In line with Dunn's commentaries, vitality is captured on scales as 'approaching life with excitement and energy, feeling vigorous and enthused; living life as an adventure; feeling alive and activated; zest for life'. It is unclear whether vitality is a cause or a consequence of a healthy diet, exercise, social support, and other lifestyle habits such as spending time outdoors in natural environments-it is likely a mixture of both [22][23][24][25]. The concept of high-level wellness may be identified in so-called 'blue zones', where longevity, chronic disease resilience, and quality of life are found in tandem.
---
Preventive Medicine, Public and Planetary Health
The term planetary health emerged from the annals of preventive medicine, health promotion and the environmental health movement; in 1972, physician ecologist Frederick Sargent II, MD advocated for a greater understanding of the interrelations between the 'planetary life-support systems' and health (not simply the absence of disease) [26]. In 1974, Soviet bio-philosopher Gennady Tsaregorodtsev called for novel and integrative approaches to 'planetary public health' [27]. He also advocated for a greater understanding of the biopsychosocial needs of humans in the context of ecosystems at micro-and macro-scales. Both writers underscored the urgent need for information-gathering and actionable steps in relation to the human health sequelae of environmental degradation-the focus on preventing unanticipated consequences (those corrosive to wellness) of human-induced changes to the natural environment.
On the environmental side of health, work of multidisciplinary scientists (especially ecology, toxicology, geography, and other environmental sciences) was folded into definitions of health by environmentalists and various advocacy groups. For example, in 1980, the environmental group Friends of the Earth expanded the World Health Organization definition of health to include ecological and planetary health inputs: "health is a state of complete physical, mental, social and ecological well-being and not merely the absence of disease-that personal health involves planetary health" [28].
At the same time, these sentiments were echoed within the growing holistic health movement of the 1980s which argued for: "greater attention to prevention . . . (and) a different philosophical framework through which individual, community, environmental and planetary health can be better understood in a broad and integrated fashion" [8]. Nursing, a profession which has been unified by deeper understandings of the words 'health' and 'care', was progressive in underscoring planetary health: "the health of each of us is intricately and inextricably connected to the health of our planet" [29]. By the early 1990s, leaders in nursing advocated for a need to "understand health as a reintegration of our human relationships with nature . . . (and maintain) openness to nature's healing power" (and a) "broader ecologically-informed perspective on health" [30].
By the mid-1990s, the 'wellness movement' had, according to experts in health education, "added a sixth dimension of health (that is, in addition to physical, social, emotional, intellectual, and spiritual), environmental or planetary, health. This dimension involves both micro (immediate, personal) and macro (global/planetary) environments" [31]. Health education textbooks maintained that we must "now view health as the presence of vitality-the ability to function with vigor and live actively, energetically, and fully. Vitality comes from wellness, a state of optimal physical, emotional, intellectual, spiritual, interpersonal, social, environmental, and even planetary wellbeing" [32]. Viewed this way, the word health cannot be disassociated from the words equity, access, and opportunity.
It is also important to point out that the 'planetary health' movement which began in the 1980s was an extension of indigenous knowledge and ideation: scholars have underscored that indigenous cultures have long-since understood that "human health and planetary health are the same thing" (or "to harm the Earth is to harm the self ") [33]. For example, Lori Alvord, MD, the first conventionally-trained female Navajo surgeon in the United States, stated: "I cannot think of a single thing that would be more important to us (North American indigenous peoples) than to have a pure environment for our health . . . human health is dependent upon planetary health and everything must exist in a delicate web of balanced relationships" [34].
An understanding of the links between human and planetary health among indigenous peoples is a product of emotional bonds with the natural environment and effective, trans-generational knowledge transfer [35,36]. Indeed, the ecopsychology movement of the early 1990s advocated for "a planetary view of mental health . . . to live in balance with nature is essential to human emotional and spiritual well-being, a view that is consistent with the healing traditions of indigenous peoples past and present" [37].
In sum, the environmental health, preventive medicine, and wellness movements of the late 20th century often included a planetary health perspective. However, it must be recognized that the foundations of the contemporary planetary health concept are a product of indigenous science and medicine, and longstanding awareness that human health (that is, wellness) is dependent upon the vitality of the natural environment [38]. In the context of high-level wellness, preventive medicine is tasked not only with helping to prevent the path to specific diseases, but to prevent departure from vitality. We turn now to examine the accelerating pace at which the term planetary health has moved into the glossary of science and medicine.
---
Planetary Health Moves to Mainstream
"Even with all our medical technologies, we cannot have well humans on a sick planet. Planetary health is essential for the well-being of every living creature. Future healthcare professionals must envisage their role within this larger context, or their efforts will fail in their basic objective. Although until recently healthcare providers could ignore this larger context, such neglect can no longer be accepted".
Thomas Berry, 1992 [39] Although the term planetary health was used frequently by various experts, researchers, clinicians, academics, and advocates, only recently has the concept entered the lexicon of mainstream science and medicine. In 2015, the Rockefeller-Lancet Commission on Planetary Health published its landmark report; the expansive document-which covered political, economic, and social systems-formally defined planetary health as "the health of human civilization and the state of the natural systems on which it depends", with its stated goal to find 'solutions to health risks posed by our poor stewardship of our planet' [9]. As a crude measure of the report's impact, results of a PubMed search for "planetary health" demonstrate that over 70% of the citations have been published post-2014. The Commission report, financially supported by the Rockefeller Foundation, has already been cited over 300 times on Google Scholar; it has also spawned a dedicated Lancet Planetary Health journal. There is little doubt that the Commission report and the efforts of other groups have moved planetary health into widespread discussion.
The contemporary planetary health concept is meant to break down silos and galvanize research efforts so that there is greater awareness of how specific pieces of research work toward solving the (interrelated) grand challenges of our time; planetary health is, of course, the terrain of environmental impact assessments and strategic environmental assessments, climate indicators, and toxin-based units of analysis; however, in 2018, one of the leading voices in the current planetary health movement-Lancet Editor-in-Chief, Dr. Richard Horton-underscored that it is so much more:
"Planetary health, at least in its original conception, was not meant to be a recalibrated version of environmental health, as important as environmental health is to planetary health studies. Planetary health was intended as an inquiry into our total world. The unity of life and the forces that shape those lives. Our political systems and the headwinds those systems face. The failure of technocratic liberalism, along with the populism, xenophobia, racism, and nationalism left in its wake. The intensification of market capitalism and the state's desire to sweep away all obstacles to those markets. Power. The intimate and intricate effects of wealth on the institutions of society. The failure of social mobility to compensate for steep inequality. The decay of a tolerant, pluralistic, well informed public discourse. The importance of taking an intersectional perspective. Rule of law. Elites. The origins of war and the pursuit of peace. Problems of economics-and economists" [40].
We agree with this sentiment. Indeed, the future of planetary health in the context of preventive medicine and environmental health requires a greater understanding of a 'planetary health psyche'; by this we mean deeper insight into the ways in which emotional bonds are developed between person and place, and the collective cognitions and behaviors which have resulted in environmental degradation and 'Anthropocene Syndrome' in the first place [41]. This goes far beyond the now extensive research showing the health benefits-physical, emotional, cognitive, social, and spiritual health-of contact with natural environments [42,43].
The preventive form of planetary health is now an imperative; as stated by Harvard psychiatrist John E. Mack (1929-2004), we must develop a relational psychology of the Earth which allows us to "tell unpleasant or unwelcome truths about ourselves . . . to explore our relationship with the Earth and understand how and why we have created institutions that are so destructive to it . . . we in the West have rejected the language and experience of the sacred, the divine, and the animation of nature. Our psychology is predominantly a psychology of mechanisms, parts, and linear relationships. We have grown suspicious of experiences, no matter how powerful" [44].
The development of emotional connections with the natural world-and health-related associations with such emotional bonds-is now a measurable construct in the form of nature relatedness (see also, related validated instruments such as nature connectedness or nature connectivity scales) [45]. Nature relatedness scales are a means for researchers to evaluate individual levels of awareness of, and fascination with, the natural world; nature relatedness scores encapsulate the degree to which individuals have an interest in making contact with nature. While this body of research is far from robust, the available evidence indicates that nature relatedness is positively associated with general health, mental wellbeing, empathy, pro-environmental attitudes/behaviors, and humanitarianism (and negatively with materialism) [46][47][48][49][50][51].
The challenge for global researchers is to develop a more sophisticated understanding of how nature relatedness fits into the planetary health imperative; how is nature relatedness fostered and how is it influenced by cultural experience and socioeconomic variables [52,53]? What are the biological underpinnings of nature relatedness in relation to non-communicable disease [54]? How does it influence environmental behaviors and the political-economic viewpoints outlined by Horton [55]? Are high levels of nature relatedness a 'burden' in some cases? For example, in cases where environmental degradation and biodiversity losses are immediately apparent [56], it might be expected that rapidly changing environmental conditions would provoke distress (Box 2).

Humanity is facing colossal, interconnected global challenges. It is now abundantly clear that human-caused climate change represents a threat to all of humanity. Extreme temperature and weather events, degraded air quality, and the spread of diseases via food, water, and alterations to the life of vectors (such as ticks and mosquitoes) are now a reality [57]. Climate change does not stand alone as a looming public health threat. It is coupled with environmental degradation (through industry and invasive species), biodiversity losses, grotesque health disparities, the global spread of ultra-processed foods, and what has been described as a 'pandemic' of non-communicable diseases [41,56,58,59]. The burden of these global threats is shouldered by the socioeconomically disadvantaged.
Only recently have researchers begun to tabulate the ways in which environmental degradation takes its toll on mental health. In areas where environmental degradation has already been significant, researchers see a worsening of mental health-described by some as 'ecological grief' [60]. There is an urgent need to study the ways in which climate change and environmental degradation not only contribute to NCDs, but also how they contribute to mental stress and diminish vitality [61][62][63].
---
Planetary Health vs. Authoritarianism
More than ever before, medicine, science, and health (at all scales) are political discussions [64][65][66][67]. Rapid change in communication technology and social media has accelerated the ability of misinformation to spread globally. We have now entered a strange era dubbed 'post-truth' [68], a time when it is no longer tenable to remain on the sidelines as a health 'care' spectator. However, in comparison to other professions and even the general population, US physicians show low levels of civic participation [69,70].
Recent elections in North America and Europe have underscored the ways in which public health is threatened by political authoritarianism [71,72]; however, authoritarianism and social dominance orientation are not constrained to the political arena and politicians. Rather, they can be found in many contemporary social structures, including those associated with westernized medicine [73] and science [74].
In his writings on wellness, Dunn underscored that authoritarianism is a significant barrier to global wellbeing; in order to remedy this, he encouraged greater inclusion of political science in health research and education. He also advocated for a greater understanding of leadership styles as influences on the health of groups, and broader awareness of the ways in which scientific findings are selectively misused. In particular, he was concerned about the abuse of science by socially-dominant political elites and those with biased interests in the outcomes. During Dunn's time, research on authoritarianism (as a psychological construct) was still in its infancy. Today, this area of research is far more robust, and it is much easier to determine the ways in which authoritarianism interferes with health. Authoritarianism is described as expecting or requiring people to obey, favoring a concentration of power, and limiting personal freedoms. Scores on authoritarianism scales are associated with stigmatization of out-groups, a rigid adherence to mainstream convention, and broad aspects of prejudice [75][76][77]. Authoritarianism predicts intolerance to diversity and differing cultures, aggression toward out-group members, and hyper-vigilance to threats against non-conformism. It is also associated with a cognitive style devoid of fine-grained discourse and nuance; out-groups are labeled in simplistic, all-or-none fashion [78].
Social dominance orientation (SDO) is a related psychological construct that is characterized by attraction to hierarchy and areas of prestige found within social systems. SDO scales capture beliefs regarding the acceptability or entitlement of high-status groups to dominate other groups, and attitudes toward maintaining social and economic inequality. Higher scores on SDO scales are associated with lower empathy, and less concern for matters of social justice and inequalities [79]; conversely, these individuals are hyper-vigilant to threats-real and perceived-that might compromise privileged status and its benefits [80]. Researchers have shown that higher SDO predicts prejudice and diminishes awareness that power gained from the dominant social position is being used for personal gains [81,82]. The overlaps between SDO and authoritarianism have been consistently noted, such that researchers refer to the combination of SDO and authoritarianism as the "lethal union".
The relevancy of authoritarianism and SDO to planetary health is now obvious. Authoritarianism and/or SDO predict denial of the seriousness of climate change, lower levels of environmental concern, and a hierarchical, anthropocentric view of nature [83][84][85][86][87]. Many public health professionals are keenly aware of the threats posed by political authoritarianism. Indeed, recent elections in North America and Europe have been a catalyst in (re)emphasizing the importance of political science in personal, public, and planetary health [88].
Empathic, caring, civic-minded professionals that fill the ranks of global healthcare are obligatory humanists; because so many health threats-those linked to ecosystems and the biosphere, and infectious and non-communicable diseases alike-are oblivious to national boundaries, humanist healthcare professionals are, in turn, obligatory anti-nationalists. Thus, public, preventive, and environmental health is built upon vigilance for political authoritarianism. It is understood that the misguided actions of any one nation, or even one individual, can conspire against all of humanity. However, this does not mean that SDO or institutional authoritarianism is a problem to which science and medicine are immune. On the contrary, research shows that authoritarianism and/or SDO may be uncomfortably high among students at entrance to medical school, increased through medical education, and reinforced at the institutional levels of medicine [89][90][91][92][93][94]; medicine in general, and technical medical disciplines such as surgery in particular, maintain high levels of perceived status [91]. That is a problem not only for clinical care, but also for building (and maintaining) public trust in science and medicine at-large.
Research is beginning to tease out the motivations of students who enter medical school as they relate to money and status, and connect these to characteristics such as low agreeableness and intolerance of opposing views [95]. Since experimental studies show that manipulating social status and power (in an upward direction) increases social dominance, and that SDO can be provoked by status reminders and cues such as money [81,96,97], medicine may need to look inward and examine its commitment to the principles of planetary health. Indeed, contemporary research supports Dunn's contention that individual (and in-group) authoritarianism is a barrier to the collective action required to support the core tenets of planetary health-that is, it blocks social rights-based movements (civil, gender, environmental, and otherwise) [98].
As discussed in detail elsewhere [73], entering medical school with a high desire for social status, or with higher baseline levels of authoritarianism and social dominance orientation than societal norms-and to have such characteristics amplified through medical training and institutional structures-is at the heart of Horton's plea [40] for a planetary health agenda designed for meaningful change. How can science and medicine challenge an unhealthy status quo if it is unwilling or unable to confront its own contextual power hierarchies [99,100]? These are concerns which permeate healthcare-at-large. Higher SDO (even among healthcare professionals who are not medical doctors) is associated with an unwillingness to engage in inter-professional education [101]. This is likely to reflect more generalized shifts in societal goals and value systems away from meaningful life philosophy towards an emphasis on financial wealth as the dominant measure of success [102].
---
Conclusions
The contemporary concept of planetary health-which has its roots in the late-20th century preventive medicine and environmental health movements-emphasizes that health equates to vitality at scales of person, place, and planet. It asserts that preventive medicine is a broad term, one which extends to the planet's natural systems-the ecosystems and biodiversity upon which our own vitality depends. Planetary health is an adisciplinary unifying concept which allows researchers working in seemingly disparate branches of science and medicine to understand the relevancy of the toil provided by each group.
Specifically, we must advance the cause of planetary health by demonstrating a willingness to engage with and promote other disciplines. To this end, there are now encouraging examples of collaborative initiatives between health providers, regenerative agriculturalists, and local communities-notably in developing regions of the world-with demonstrated community-wide benefits for health, wealth, employment, and environmental sustainability [103]. These integrative models provide a path forward for ensuring the health of people and planet.
In the context of planetary health, the urgent task for preventive medicine and environmental health is to provide deeper insight into the ways in which we develop relationships with nature, and how we feel, think, and respond to the natural world. This includes the biological, social, political, and economic underpinnings of nature relatedness (and related psychological constructs) and its impact on vitality at all scales. It includes a more fine-grained understanding of what prevents the realization of the planetary health goals set forth by the WHO and the Lancet Commission on Planetary Health report [9]. From our perspective, this means further study of authoritarianism and social dominance orientation (at individual, institutional and other scales) vis-à-vis the structures-including those found in politics, science, medicine, and elsewhere-which either support the status quo, or provide meaningful solutions to planetary health objectives. This applies equally to the injustices and inefficiencies of global systems, such as food and international trade systems, which also serve to undermine health and equality through biased authoritarian and neoliberal ideologies [104,105].
The idea that threats to the health of the person, the place (community), and the planet are distinct from each other is a mirage; this false notion has been challenged by environmental health and preventive medicine for decades. We have moved past the point at which such discourse is merely intellectual fodder. We argue that in 2019, one simply cannot claim to be a 'health' care professional without advocating forcefully for the planet. There are no healthy people on an uninhabitable planet, and we are fast heading there. If its true goals are realized, environmental health and preventive medicine at the planetary scale will, as Jonas Salk implored in 1984, place emphasis on the idea that we should want "those who follow us to look back on us as having been wise ancestors, good ancestors" [106].
---
Author Contributions: S.L.P. developed the commentary, project oversight and research analysis. A.C.L. provided the research analysis and developed the historical aspects of the manuscript. D.L.K. is responsible for the commentary oversight, research interpretation, critical review of manuscript, and input of public health perspectives. All authors contributed to the development and review of the manuscript. All authors read and approved the final manuscript. The artwork was created by S.L.P.
Background: Men of low socioeconomic position (SEP) are less likely than those of higher SEP to consume fruits and vegetables, and more likely to eat processed discretionary foods. Education level is a widely used marker of SEP. Few studies have explored determinants of socioeconomic inequalities in men's eating behaviours. The present study aimed to explore intrapersonal, social and environmental factors potentially contributing to educational inequalities in men's eating behaviour. Methods: Thirty Australian men aged 18-60 years (15 each with tertiary or non-tertiary education) from two large metropolitan sites (Melbourne, Victoria; and Newcastle, New South Wales) participated in qualitative, semi-structured, one-on-one telephone interviews about their perceptions of influences on their and other men's eating behaviours. The social ecological model informed interview question development, and data were examined using abductive thematic analysis. Results: Themes equally salient across tertiary and non-tertiary educated groups included attitudes about masculinity; nutrition knowledge and awareness; 'moralising' consumption of certain foods; the influence of children on eating; availability of healthy foods; convenience; and the interplay between cost, convenience, taste and healthfulness when choosing foods. More prominent influences among tertiary educated men included using advanced cooking skills but having relatively infrequent involvement in other food-related tasks; the influence of partner/spouse support on eating; access to healthy food; and cost. More predominant influences among non-tertiary educated men included having fewer cooking skills but frequent involvement in food-related tasks; identifying that 'no-one' influenced their diet; having mobile worksites; and adhering to food budgets. Conclusions: This study identified key similarities and differences in perceived influences on eating behaviours among men with lower and higher education levels. Further research is needed to determine the extent to which such influences explain socioeconomic variations in men's dietary intakes, and to identify feasible strategies that might support healthy eating among men in different socioeconomic groups.

---
Background
Men tend to eat less healthfully than women, eating fewer fruits and vegetables [1][2][3], more red and processed meat [2,4], and greater amounts of processed discretionary foods [3][4][5]. These differences contribute to gender inequalities across a range of adverse health outcomes including obesity [6], diabetes mellitus [7] and coronary heart disease [8].
Socioeconomic inequalities in diet are well established [9][10][11]. Men and women experiencing socioeconomic disadvantage (e.g. those with low education, low income, or residing in deprived areas) tend to have eating behaviours not conducive to good health [9][10][11]. Compared with more advantaged adults, those who are disadvantaged tend to eat fewer fruit and vegetables [9,12], and less fibre [9]. Disadvantaged adults also consume more fat, skip breakfast [9] and eat fast food more frequently [13].
Education was selected as an indicator of socioeconomic position because it is a strong determinant of future occupation and income, reflects knowledge-related assets and other intellectual resources, and has been strongly associated with dietary intake in previous studies [14].
The social ecological model, which recognises that individuals are embedded within larger social systems, provides a useful framework for investigating determinants of behaviour. According to the model, behaviours are determined by the interactions of individuals and their social and physical environments [15]. While correlates of women's eating behaviours are well characterised [16][17][18][19][20], influences on men's eating behaviours are less well understood, and are likely to differ from those that influence women [21,22]. While some sex differences in intakes may be attributable to biological factors, it is likely that a range of other factors at the individual, social and environmental levels are also implicated. For instance, social norms related to masculinity may lead men to perceive that consumption of certain healthy foods, and activities such as meal planning and cooking, are feminine [23], and hence 'unmasculine' [24].
Several potential drivers of socioeconomic inequalities in men's eating behaviours have been identified in studies focussed on singular domains of the social ecological model. Intrapersonal factors including nutrition-related knowledge [25][26][27], self-confidence, problem-solving skills and the ability to process information are important for helping individuals overcome obstacles to adopting more favourable eating behaviours [25]. Socioeconomically disadvantaged men may also be less likely to use nutrition information and may also lack the skills or confidence to prepare healthy meals [27,28].
Social norms, particularly those related to masculinity, may also contribute to socioeconomic differences in eating behaviours. Men who endorse dominant norms of masculinity were shown to adopt less optimal eating behaviours than their peers who endorse less traditional norms [23]. Young blue-collar male workers tended to show little consideration for being health-conscious, resulting in consumption of diets high in saturated fats and sugars [29]. Those men's food practices reflected gender identity, with food preparation commonly viewed as "women's work". Blue-collar workers' food choices were also influenced by poor dietary role models, including peers, co-workers, and supervisors [29].
Environmental factors may also explain socioeconomic differences in men's eating behaviours, such as differential access to stores selling both healthy and less healthy foods [30]. Disadvantaged men may be less likely to make optimal food choices due to limited access to affordable nutritious foods within the local environments where they work and live. Danish men with low education believed their weight gain was partly attributable to the types of foods available in their work environment [31]. In New Zealand, the least deprived areas had 76% fewer fast food outlets than the most deprived areas, and fast food outlet exposure was negatively associated with individual-level SEP indicators (highest educational attainment and relative income) [30].
To our knowledge, potential explanations of socioeconomic differences in men's eating behaviours across intrapersonal, social and environmental domains have not previously been investigated simultaneously. Examining these influences across multiple domains together may yield a better understanding of the interactions between factors from different domains, and may identify factors that have been overlooked when domains were investigated in isolation. How these factors may shape socioeconomic inequalities in eating behaviours among men therefore remains unclear. The present investigation aimed to qualitatively explore potential explanations for socioeconomic differences in eating behaviours among men with tertiary and non-tertiary education.

---
Methods
This study is reported according to the consolidated criteria for reporting qualitative research guidelines [32], and was conducted in conjunction with an independent social and market research agency, Market Solutions P/L (http://www.marketsolutions.com.au/). The agency was selected to assist with the study given their strong track record in conducting social science research [33,34], and their familiarity with qualitative methodology and research, particularly amongst socioeconomically disadvantaged groups. The agency is accredited to the international ISO standard for market, social and opinion research (AS ISO 20252) and is a member of the Association of Market Research Organisations (AMSRO). Market Solutions P/L was responsible for recruitment, conducting interviews, recording and transcribing data, and transmitting de-identified data to the study investigators. The study investigators were responsible for all other aspects of the study.
The study was approved by the Deakin University Faculty of Health Human Ethics Advisory Group (HEAG-H; approval HEAG-H 95_2015). All men provided informed, verbal consent to participate. This was recorded by interviewers at the point of first contact with the men in a password protected project database stored on a secure server.
---
Participants
The sample comprised 30 men of working age (18-60 years): 15 with a non-tertiary education level, i.e. completed Year 9 or less, Year 10, Year 11, Year 12 (final year of high school in Australia), or Certificate/Diploma/Advanced Diploma; and 15 with a tertiary education level, i.e. a Bachelor degree or higher, from Melbourne, Victoria, and Newcastle, New South Wales (large metropolitan regions in two Australian states). In nutrition research, education is often stratified in this way to reflect SEP (high SEP indicated by tertiary level qualifications, low SEP by non-tertiary level qualifications) [35][36][37]. Education was employed as the measure of SEP in this study as it is a relatively stable indicator of SEP [14,39], and the current qualitative data can be used to generate hypotheses that could be followed up in future research [38]. Seven or eight men each with tertiary or non-tertiary education participated from each site. Men of working age were the focus of the present investigation as different factors may influence eating behaviours among older men (e.g. those who are retired), given substantial lifestyle changes that come with older age (e.g. income, available time, household structure, health issues [40]).
---
Recruitment procedure
Market Solutions P/L accessed telephone directories of community members in both target catchment areas, including mobile and landline numbers and randomly selected men's numbers to be called by one of three male interviewers (agency employees trained in qualitative methodology). Male interviewers were chosen to maximise the potential to build reciprocity between the interviewer and participant which may yield richer data than may have been gathered by female interviewers [41]. Men were invited to complete a telephone-based interview either immediately or at a more convenient time. Purposive sampling [42] based on educational attainment and city of residence was used to recruit a total of 30 men (15 from each target catchment, and 15 each with tertiary and non-tertiary education).
Interested participants received study information via telephone and were assessed for eligibility (i.e. aged 18-60 years, tertiary or non-tertiary educated as defined above, and able to communicate clearly in English). Men were offered an AUS$20 voucher for a leading retailer as compensation for their time (mailed post-interview).
---
Semi-structured interview schedule and procedure
Development of questions for the semi-structured interviews was informed by the social ecological model [15], and previous research examining determinants of men's eating behaviours [21,23,24,26,28,29,31,43-46].
Questions were primarily open-ended and aimed at assessing participants' usual eating behaviours and perceived influences on these (Additional file 1: Table S1). Men were prompted to discuss food task responsibilities; influences on eating behaviours and eating choices (including an exploration of trade-offs between health, convenience, peer modelling, price, accessibility, and taste); body weight; masculinity; social influences; perceptions of other men's eating behaviours (social norms); and neighbourhood availability of healthy foods.
---
Interview procedure
Interviews were conducted by telephone in 2015. One-on-one telephone interviews were chosen as men resided across a wide geographical area, making face-to-face interviews less feasible. The interview schedule was pilot tested and refined with the first two men (one with tertiary education from Melbourne, one with non-tertiary education from Newcastle). Piloting showed no major issues with timing or questions, with only minor changes made for clarification. Pilot data were not included in further analyses.
Before commencing, interviewers asked for permission to digitally record the interview, and participants answered sociodemographic questions. Interviews lasted between 25 and 35 min, and once complete, were transcribed verbatim from the recordings.
---
Sociodemographic characteristics
Men provided their age (five response categories ranging from 18-24y to 55-60y) and highest attained level of education (six categories ranging from Year 9 or less to Bachelor degree or higher). Employment status (working full-time/part-time, studying, unemployed, retired, home duties, or other), annual household income (eight response categories ranging from <AUS$20,000 to ≥AUS$150,000, including don't know and refused), household structure (couple with children, couple without children, single parent, single person, or flatmates) and occupation (professional, technician/trades worker, community and personal services worker, manager, clerical and administrative worker, machinery operator/driver, sales worker, labourer, or other) were also established.
---
Data analysis
Qualitative description was used to build a comprehensive understanding of socioeconomic differences in influences on men's eating behaviours. Qualitative description aims to maximise descriptive and interpretive validity by providing an account of events (including meanings participants attribute to those events) that both participants and researchers would agree is accurate [47,48]. This methodology is more appropriate than those requiring a greater degree of researcher interpretation given the goal of the present investigation to discern potential influences on socioeconomic differences between men's eating behaviours [48].
Data were analysed by the lead author (LS) using thematic analysis, which comprised four key steps [49]: immersion in the data, line-by-line coding, creating categories, and generation of themes. LS read and re-read transcribed interviews to build familiarity with the data (data immersion), and then performed abductive thematic analysis [50] to code data using descriptive labels. Categories were formed by linking together coded data that related to similar concepts, while keeping categories for tertiary and non-tertiary educated men separate [49]. Based on these categories, LS identified key emerging themes that were salient for men within each education level group. Individual influences were each classified into a separate theme (e.g. 'cost', 'convenience', etc.). Findings were generated via an iterative, abductive cycle, moving back and forth between inductive and deductive reasoning. Where relationships between themes and/or sub-themes were identified, such interactions were classified under the predominant theme that united those factors (e.g. the interplay between cost, convenience, taste, and healthfulness of food was described within the 'cost' theme; of these factors, cost was determined to be most predominant as participants typically described cost before discussing the other factors).
Rigour was maintained via researcher reflexivity (i.e. ensuring one's own perspectives are left out of the coding process as much as possible), development of an audit trail by recording steps taken in the development and reporting of findings, linking interpretations with the raw data by presenting participant quotes, and peer debriefing with the study's co-authors throughout the analytical process. An independent researcher (non-author) double-coded a subsample of interviews (20%; n = 6, three from each education group). Each coder independently and systematically employed the iterative, abductive cycle described above to create categories from the data. The purpose of double coding was to explore potential alternative interpretations of the data, as the iterative process of cross-checking coding strategies and data interpretation by the researchers enables potential alternative interpretations to be identified and discussed, serving to create a more thorough examination of the data [51]. Data analysis was conducted using raw transcripts entered into NVivo software (version 10, QSR International, Melbourne, Australia).
---
Results
Sociodemographic characteristics of the sample are shown in Table 1. A range of age groups were represented, with the majority aged 45-54 years, and employed in full-or part-time work (80% of non-tertiary educated men, 87% of tertiary educated men). Very few men were studying (n = 2), unemployed (n = 1), or retired (n = 1); and none were engaged in home duties or other forms of employment (data not shown). Most tertiary educated men worked as professionals (77%). Among non-tertiary educated men, 50% worked as technicians and trades workers, 17% worked as managers, and 17% in clerical and administrative roles. Only one man was employed as a machinery operator/driver, and none were employed as sales workers, labourers, or in other roles (data not shown).
Major emerging themes and exemplary quotes are presented below, with results presented stratified by education level. Themes found to be equally prominent across both groups of men included the intrapersonal-level influences of attitudes relating to masculinity, nutrition knowledge and awareness, and 'moralising' consumption of certain foods; and social influences of children. Environmental themes discussed by both groups included availability of and access to healthy and unhealthy foods; convenience; and the interplay between cost, convenience, taste and healthfulness when choosing foods (discussed within the cost theme).
Intrapersonal influences more frequently discussed by tertiary educated men within the themes identified included having greater food-related skills (e.g. cooking involving multiple, complex steps), but less involvement in food-related tasks (e.g. menu-planning, purchasing) because of time constraints. Almost all tertiary educated men with partners identified their partners as a positive influence on eating behaviours. Environmental influences more dominant among tertiary educated men included accessibility of healthy foods; and perceiving healthy foods as expensive and unhealthy foods as inexpensive.
A number of influences within themes were more frequently discussed by non-tertiary educated men, including having less developed cooking skills but regular involvement in food-related tasks such as shopping, preparing, and cooking meals when compared to discussion by tertiary educated men. While men from both groups recognised nutrition knowledge as an influence on their eating behaviours, non-tertiary educated men reported lower perceived levels of nutrition knowledge, and sometimes described misperceptions related to nutrition and body weight. A theme identified only among non-tertiary educated men was the perception that no-one influenced their eating behaviours. Non-tertiary educated men also identified mobile worksites (i.e. moving from one work location to another during the day/week as necessitated by their job, common among those working as tradesmen) as an unhealthy influence on eating, and discussed the need to adhere to a food budget.
---
Intrapersonal influences
Intrapersonal influences included attitudes related to masculinity; food-related tasks and skills; nutrition knowledge and awareness, and moralising consumption of certain foods.
---
Attitudes related to masculinity
Men from both educational groups reported that they did not believe that preparing and consuming healthy food were negatively associated with principles of masculinity, but rather were important for good health. Some tertiary educated men thought perceptions that it was unmasculine to eat healthfully had become less common over time, while others from both groups thought eating healthfully actually enhanced masculinity.
"Things have changed. It might just be a reflection of my own friends, but I think a lot of guys I know cook more and want to eat a greater range of foods. I think there is a change where guys are picking up more responsibility at home." Tertiary educated man.
"I tend to think if you eat healthy it would give you a greater sense of masculinity from a male point of view." Non-tertiary educated man.
---
Food-related tasks and skills
Food-related tasks and skills were discussed as an influence on men's eating behaviours by almost all men from both education groups. Tertiary educated men reported taking part in meal planning, food purchasing and preparation (although to a lesser degree than non-tertiary educated men), and adding extra vegetables to a dish to make it healthier. Some non-tertiary educated men described themselves as expert cooks, while others felt they had sufficient skills to put simple meals together. Both groups of men also frequently prepared their lunch for work.
"Tonight I've got leftover pasta… I just added frozen peas and some fresh asparagus, which I just boiled quickly and I added it in..." Tertiary educated man.
"If I do prepare a meal I might make myself some bacon and eggs on toast or I might make myself a burger if the materials are here at the time." Nontertiary educated man.
Men from both groups identified several reasons for cooking, including sharing the workload with their partner or spouse and/or because they enjoyed cooking. A few of the non-tertiary educated men described eating at home because it was cheaper to cook at home than to eat out. Some non-tertiary educated men also described sharing the food preparation workload due to time constraints, such that whoever in the household arrived home earliest after work, or had more time, did the cooking.
"[Dinner time is] the time that my wife sort of works a bit later and I'm working days and I've got time to cook 'til she comes home… I like the taste and I like experimenting with cooking and making a few different things." Non-tertiary educated man.
---
Nutrition knowledge and awareness
Men from both groups were aware of the importance of eating healthfully and thought people, particularly other men, were far more aware of the importance of eating healthfully than in the past, and that awareness was continuing to grow over time.
" Both groups of men considered healthfulness when making food choices, with many choosing foods specifically because they felt they were healthy. Nutrition knowledge was not determined by skill-testing questions and men were not asked to directly compare their knowledge to other men's knowledge, however, non-tertiary educated men perceived that they had lower nutrition-related knowledge than men with a tertiary education, and sometimes described misperceptions related to nutrition and body weight.
"Steaks are probably... better for me than any of the other fatty food. Even with sausages sometimes they can be real fatty where at least I know if a steak's done properly there's not much chance of a lot of fat still being inside of it". Non-tertiary educated man.
"My understanding is that fat is only stored to a point and then your body won't take anymore. What we assume is eating too much fat is actually carbohydrates stored as fat… In actual fact [people] are not fat. They're just carrying an enormous amount of carbohydrate that they're not using." Non-tertiary educated man.
---
'Moralising' consumption of certain foods
Men in both groups moralised consumption of certain foods based on their perceived healthfulness, particularly snack foods. In 1999, Rozin described moralisation as the act of accreting moral value to activities or objects (such as food) that were previously without moral value [52]. Moralising food consumption can be regarded as translating food judgements into corresponding behavioural rules. For example, men associated choosing 'good' food with good health or high self-control, while 'bad' food choices were linked with poor health and low self-control. Such food judgements can be taken further to imply that certain food choices are righteous/sinful, or moral/immoral [53]. Men in both groups often described healthy food as 'right' or 'sensible', while consumption of unhealthy foods was construed negatively, associated with feelings of guilt, or viewed as 'terrible'.

"I always favour seafood because I tend to think it's a more sensible choice… I think seafood's invariably a healthy choice..." Non-tertiary educated man.
---
Social influences
Social influences on eating behaviours identified included the influences of partners/spouses and children, and the perception that no-one influenced eating behaviours.
---
Partners and spouses
Among those men with partners, more of the tertiary educated men than those with non-tertiary education believed that their partner had a healthy influence on their eating behaviours. In the majority of cases partners' main mechanism of influence was acting as gatekeepers of the home food environment by controlling the healthfulness of foods purchased, and preparing nutritious meals. Some tertiary educated men also thought their partners verbally encouraged them to eat healthfully, or that their partner was a healthy role model.
"[My wife helps me eat more healthfully]… by positive reinforcement, by actively seeking and assisting in healthy choices, healthy recipes and healthy food" Tertiary educated man.
---
Children
Among both groups of men, most who had children thought their children influenced them to eat healthfully.
A number of fathers described choosing healthier foods in order to make them available to their children, as well as to role-model healthy eating for their children.
---
No-one influences eating behaviour
Several non-tertiary educated men stated they did not believe anyone else exerted influence on their eating behaviours, despite many of these men having partners and/or children. This view was not identified by tertiary educated men.
---
Environmental influences
Environmental influences identified by both groups of men included availability of, and access to, healthy and unhealthy foods as well as convenience and cost.
---
Availability of and access to healthy and unhealthy foods
All men discussed availability of, and access to, healthy and unhealthy foods at home, work, and in the local neighbourhood as affecting food choice. Almost all men from both groups felt healthy food was readily available (e.g. where they did their weekly grocery shopping); and accessible in the local neighbourhood (e.g. at local markets and supermarkets that could be reached either on foot or by car in a short amount of time).
Tertiary educated men thought access to particular foods increased the likelihood those foods would be eaten, therefore ready access to healthy foods would result in eating more healthfully in general. A few non-tertiary educated men also chose foods at home, particularly snack foods, simply because they were readily accessible.
"If I'm in the right frame of mind when I'm shopping I'll buy better things… I'll buy more vegetables and more fruit… And if I buy it, I eventually will eat it. I don't like wasting stuff… Just making sure that you buy more fruit and vegetables than you think you need… because they're there, you can think of things you can do with them." Tertiary educated man.
"[For snacks, I eat] anything I can get my hands on really. I'm a bit of a human garbage disposal, so there's fruits and biscuits and nuts and whateverchocolate. Anything I can get. Chips. Anything I can get a hold of. Anything in front of me." Non-tertiary educated man. Some non-tertiary educated men had mobile worksites, and so work lunch choices were influenced by what was available in the neighbourhood surrounding their workplace, i.e. they purchased food wherever they were located for a job.
"Not an actual workplace cafeteria. I'm self-employed. I'm sort of all over the place so it'd be just like a shop [where I buy my lunch when at work]. Yeah, just whatever's closest." Non-tertiary educated man.
---
Convenience
Almost all men from both groups cited convenience as a major influence on food choice, selecting foods, particularly breakfast and lunch foods, because they were close to hand, and quick to purchase and consume. Among men who purchased work lunches, several from both groups felt the convenience of food and the time it took to access it influenced their choices, often leading to less healthy food purchases.

"There's always a lot more temptation to eat junky food [for work lunch], because it's really easy and it's there, and it's just about everywhere that you go. You can just grab it and eat it, you don't have to think about it. And I've noticed if you have to wait and think about it, you generally change your mind." Tertiary educated man.
---
Cost
Cost influenced men's food choices. All tertiary educated men considered cost when choosing food, and the perception that healthy food was expensive was prominent among tertiary educated men, but not among non-tertiary educated men. Tertiary educated men thought the cost of healthy food was prohibitive when doing the grocery shopping, and unhealthy food items available in supermarkets were often cheap, or on special.

"It's so much easier, in particular this country, to buy cheap take-away than it is to buy what's often not so cheap healthy food and then do the groundwork of preparing. It's easier and often cheaper... You walk into a supermarket and you're going to pay AUS$3.00 for a bottle of [high-calorie beverage] and AUS$3.50 for a bottle of water. How is that possible?" Tertiary educated man.

Almost all non-tertiary educated men also considered price when choosing foods, with some households having to stay within a budget when they shopped for food.
"Generally [we cook] the cheaper cuts of meat, mince and sausages… because we're on a budget." Nontertiary educated man.
Men from both groups talked about considering cost along with other influences when choosing foods. Consistently, the interplay between cost, convenience, taste, and healthfulness of foods were considered together before a choice was made. Among men from both groups, those who prioritised health tended to consider cost as a secondary influence after health, followed by convenience, with taste being less important; among those who did not prioritise health, cost and convenience were more important over health and taste considerations.
"Probably convenience, cost and health would be the main three [influences to consider when choosing lunch] for me. It's just with my work and home life, [having a] schedule where we're home, with the little one at lunch time [and] she's having a sleep during my lunch [I choose what is convenient], and then other times cost. It's more cost effective for me to take [my lunch to work with me], something that I like to eat rather than have to pay $8 for a salad roll when I can make one and bring one from home and don't have to go looking for it as well." Tertiary educated man.
" [Food] definitely has to be filling because the price of food these days out is usually expensive. Definitely filling… You need to be content. You don't want to have one hot dog and go, 'Gee, I'm still hungry.' At the end of the day you might get to a place and there's only two options [available]. So you look at that and convenience, what's easy, what's simple. Price does come into it. Again, it's hard to judge because everything that you buy these days is pricey anyway." Non-tertiary educated man.
---
Discussion
The present investigation aimed to examine potential explanations for socioeconomic differences in men's eating behaviours by qualitatively exploring influences on eating among men of tertiary and non-tertiary education levels. Salient themes among men from both education groups included influences from intrapersonal, social, and environmental domains. Influences more predominant among tertiary educated men included having more advanced food-related skills but relatively less involvement in food-related tasks compared with non-tertiary educated men; partner/spouse support for healthy eating; access to healthy foods; and views relating to food cost. Prominent influences among men with non-tertiary education included having limited cooking skills (e.g. being able to prepare simple dishes with few steps and uncomplicated techniques) but more frequent involvement in food-related tasks, and perceiving themselves as having limited nutrition knowledge compared with tertiary educated men. These men also more often identified that no-one influenced their diet; they had mobile worksites; and they adhered to a food budget.
Neither group perceived food preparation or healthy eating to be at odds with the concept of masculinity, a finding that diverges from previous studies showing that men, irrespective of education level or occupation, considered healthy eating feminine [21,54,55]. It may be that with increasing global recognition of the importance of diet for chronic disease prevention, eating for good health has become more acceptable and normative among men since those earlier studies were published. Men's perceptions about masculinity described in the present investigation may also reflect workforce and societal changes in women's careers, with fewer men being the family's primary income provider, and fewer women staying home to perform all food-related tasks than previously. Further, the majority of participants in the present investigation were aged ≥45 years, and may have greater awareness of the importance of health behaviours as they age and face increased risk of diet-related disease.
Regarding food-related tasks and skills, tertiary educated men's cooking skills were more developed, but they were less involved in food-related tasks than non-tertiary educated men, who had more limited cooking skills but regular involvement in food-related tasks. These findings correspond with those reported previously. For example, low income US men were nearly three times more likely to be involved in meal planning and preparation compared to their wealthier counterparts [56], and Norwegian men working in blue collar occupations (carpenters) were more likely to share food shopping and preparation with their partner/spouse compared to men in white collar occupations (engineers) [57]. Consistent with our findings regarding education level and cooking skills, when self-described cooking skills were compared between Swiss men, those with high education levels had more elaborate cooking skills than less educated men [58].
Social influences on men's eating behaviours included those in their family unit (i.e. partner/spouse, and/or children), or, as for several non-tertiary educated men, no other individuals. Partner/spousal support for healthy eating was recognised as important by tertiary educated men in our study, but not among those with non-tertiary education. Conversely, low income British men previously identified female figures (e.g. spouses/partners, mothers, grandmothers) as positive influences on their eating behaviours [59]. Similarly, Dutch men with lower vocational education or below stated they would eat healthfully if their spouse/partner did [60]. A previous Australian nutrition and physical activity intervention incorporating social support by partners resulted in significant decreases in total and saturated fat consumption, and significant increases in fibre intake among men and women [61], implying that greater social support from spouses/partners would encourage men to eat more healthfully. It is unclear why our findings diverged from these previous studies; however, it may simply be a function of studying different samples. Fathers from both education groups acknowledged the importance of role-modelling healthy eating for their children, and how this encouraged their own healthy eating. Previous research showed that Australian children's total fruit consumption was positively associated with that of their father [62], which supports observations in the present investigation. That some non-tertiary educated men in the present investigation thought no-one influenced their diet was novel, and contradicts previous research suggesting that social support for healthy eating encouraged less educated or low income men to adhere to healthier eating behaviours [59,60].
On balance, findings from the present investigation and previous research suggest that role-modelling and social support are important factors for supporting men to eat healthfully, and have the potential to be powerful mechanisms through which improvements in men's diets could be achieved if incorporated into future nutrition promotion initiatives, for example, engaging men along with their partners in intervention strategies including nutrition education and cooking classes.
Tertiary educated men in our study considered healthy foods to be expensive; however, although non-tertiary educated men reported having to adhere to a food budget, they did not generally describe healthy foods as expensive. A potential explanation for this paradoxical finding could be that only six of the non-tertiary educated men had low incomes, so most may have been able to afford healthy foods. However, previous research among socioeconomically disadvantaged men showed they did not consider healthy foods prohibitively expensive [59,60]. The present investigation also revealed that men chose foods by considering a number of influences in conjunction at multiple socioecological levels (e.g. cost, taste, etc.). The observed interplay between influences on men's eating behaviours implies that multiple factors shape men's dietary behaviours. It also suggests that employing a qualitative approach to explore influences on men's eating behaviours across the domains of the social ecological model in unison, such as that employed in the present investigation, is advantageous. This can yield a deeper understanding of how influences across domains interact, which can be utilised in future to further inform research and interventions aimed at improving men's eating behaviours.
Factors identified as potential influences on socioeconomic inequalities in men's diets in this study need confirmation in larger samples using quantitative methods. Acknowledging this, the present investigation has elucidated key levers that could, if confirmed, be targeted in initiatives aimed at reducing inequalities in eating behaviours, in turn ameliorating the socioeconomic 'gap' and adverse health and economic outcomes associated with these inequalities. For example, strategies to promote healthy eating among non-tertiary educated men could focus on developing greater nutrition knowledge, improving cooking skills, identifying key social supports for healthy eating, and providing skills and strategies to purchase healthy foods, particularly whilst at work, whether at a fixed or mobile worksite, and on a budget. Strategies that could support tertiary educated men to eat healthily could include promoting greater involvement in food-related tasks and education about choosing low cost healthy foods. Previous programs incorporating some strategies identified above have successfully promoted healthy eating among women and men [63] including those experiencing socioeconomic disadvantage [64]. However, given challenges in engaging men in such programs [65], policy and practice should not only focus on developing nutrition promotion initiatives aimed at improving men's diet that are custom-made to specific socioeconomic groups, but also incorporate specific tailoring to engage men.
Study limitations should be acknowledged. Participating men may have been more interested in nutrition and health than non-participants, resulting in possible participation bias. Transferability of findings may be limited by the use of a single measure of SEP to define the sample. Almost all participating men were employed and had professional occupations, and only half of non-tertiary educated men had low incomes. More sensitive measures of education (beyond the binary categorisation applied in the present investigation) could be considered in future research. Further, education is only one of many possible measures of SEP, which is best captured by considering multiple factors such as income, education, and occupation simultaneously rather than singly. As no data about men's ethnicity or culture were gathered in the present investigation, it was not possible to make any observations about possible cultural variations in views between men. Exploring cultural differences in conjunction with socioeconomic differences may be considered in future. Also, as more than half of participants were aged 45-54 years, the generalisability to men of other age groups may be limited. Men who identified as having a partner were not asked to disclose the sex of their partner. It is unclear whether study findings would vary depending on whether the couple was same-sex or opposite-sex, and this is therefore acknowledged as a limitation. Nevertheless, qualitative studies do not intend to focus on general sample representativeness, but rather aim to generate a range of responses and hypotheses for potential follow up in future research [38].
Men may also have provided socially desirable responses, such as stating they had more favourable eating behaviours than in reality, yet participants also identified challenges faced in consuming healthy foods and openly discussed barriers to doing so, suggesting that socially desirable responses were minimised. Further, participants' responses might have been influenced by being interviewed by another male; views presented may have inadvertently been driven by participants' perceptions of shared masculine identity with, or reciprocal enactment of masculinity by the male interviewer, and consequently resulting in a more idealised cultural notion of masculinity [41]. However, as this was not reflected in the responses observed (e.g. healthy eating was not perceived to be unmasculine), the use of male interviewers here could be interpreted as a strength as there may have been reciprocity between the interviewer and interviewee, resulting in richer data than may have been gathered by female interviewers [41]. Also, using a one-on-one telephone interview methodology may have reduced some response bias as participants may have been less affected by cues from facial expressions or perceived social desirability from the researcher, e.g. in face-to-face interviews, or other participants, e.g. in a focus group setting [66,67]. While using a telephone method also has disadvantages, including lack of visual cues and difficulty building rapport [68], this method was deemed necessary as participants were recruited across a wide geographical area. Finally, data analysis occurred after data collection was complete, and therefore emerging themes could not be checked during the data collection process.
Study strengths include the qualitative design which provided in-depth, comprehensive insights into socioeconomic differences in influences on men's eating behaviours, with perspectives provided by men living in two regions of Australia, drawn from different educational strata. A further notable strength of the study is that it provided unique insights into men's eating behaviours overall, irrespective of SEP.
---
Conclusions
To conclude, the present investigation provided insights into individual, social and environmental influences on the eating behaviours of men with divergent education levels, expanding the knowledge base around this important topic. Key potential drivers of socioeconomic inequalities in men's eating behaviours were identified, with potential to inform novel strategies to encourage men to eat healthfully. Future quantitative research is required to examine how factors identified in the present investigation are associated with men's dietary intakes across socioeconomic strata; how they might explain socioeconomic differences in men's diets; and the feasibility of adopting various strategies to support healthy eating among men in different socioeconomic groups.
Authors' contributions
DC, LT, and KB designed the research; DC, LT, DLO, PJM, FJvL, and KB developed measures; LDS performed data analyses; LDS, DC, DLO and KB drafted the manuscript; all authors contributed to revising the manuscript; LDS had primary responsibility for final content. All authors read and approved the final manuscript.
---
Availability of data and materials
The dataset generated and analysed during the present investigation is not publicly available due to ethics requirements to maintain confidentiality but is available from the corresponding author on reasonable request.
---
Abbreviation
SEP: Socioeconomic position
---
Additional file
Additional file 1: Table S1. Semi-structured interview questions investigating influences on men's eating behaviours. Summary of semi-structured interview questions used in the present investigation. (DOCX 27 kb)
---
Ethics approval and consent to participate
The study was approved by the Deakin University Faculty of Health Human Ethics Advisory Group (HEAG-H; approval HEAG-H 95_2015). All men provided informed, verbal consent to participate. The ethics committee approved the procedure for verbal consent, and waived the requirement for written consent to reduce the participant burden associated with obtaining consent in written form.
---
Consent for publication
Not applicable.
---
Competing interests
The authors declare that they have no competing interests.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
Parent-child acculturation discrepancy has been considered a risk factor for child maladjustment. The current study examined parent-child acculturation discrepancy as an ongoing risk factor for delinquency, through the mediating pathway of parental knowledge of the child's daily experiences relating to contact with deviant peers. Participants were drawn from a longitudinal project with 4 years between data collection waves: 201 Chinese immigrant families participated at Wave 1 (123 girls and 78 boys) and 183 families (110 girls and 73 boys) participated at Wave 2. Based on the absolute difference in acculturation levels (tested separately for Chinese and American orientations) between adolescents and parents, one parent in each family was assigned to the "more discrepant" group of parent-child dyads, and the other parent was assigned to the "less discrepant" group of parent-child dyads. To explore possible within-family variations, the mediating pathways were tested separately among the more and less discrepant groups. Structural equation modeling showed that the proposed mediating pathways were significant only among the more discrepant parent-adolescent dyads in American orientation. Among these dyads, a high level of parent-child acculturation discrepancy is related to adolescent perceptions of less parental knowledge, which is related to adolescents having more contact with deviant peers, which in turn leads to more adolescent delinquency. This mediating pathway is significant concurrently, within early and middle adolescence, and longitudinally, from early to middle adolescence. These findings illuminate some of the dynamics in the more culturally discrepant parent-child dyad in a family and highlight the importance of examining parent-child acculturation discrepancy within family systems. | Introduction
Although Asian American adolescents are commonly perceived to be model minorities, there has been a growing concern about delinquent behaviors in this group. Indeed, studies have found that Asian American adolescents are at least as likely to engage in delinquency (e.g., graffiti painting, shoplifting or stealing a car) as their European American counterparts (Choi and Lahey 2006; Willgerodt and Thompson 2006). The literature on this topic suggests that Asian American adolescents' delinquent behaviors are tied to the challenges of adapting to life in the US, such as dealing with family and peer relationships in potentially conflicting mainstream and heritage cultures (Le 2002). Thus, it is important to consider the psychosocial predictors of delinquency, such as acculturation, in order to inform future intervention efforts.
Acculturation refers to a process through which immigrants gradually adapt their language, behaviors, beliefs, and/or values as a result of contact with the mainstream culture (Yoon et al. 2011). A significant body of work has shown that discrepancy in acculturation levels between parents and children is a significant risk factor for child maladjustment, as indicated by decreased academic performance, depression, and delinquency (Costigan and Dokis 2006a; Kim et al. 2009; Unger et al. 2009). Longitudinal research on this link's underlying mechanism, however, is limited. Also limited are studies examining within-family variations in the effects of parent-child acculturation discrepancy on child maladjustment. The present study explores how parents' knowledge of children's daily experiences (as perceived by the adolescents) and adolescents' association with deviant peers, two important constructs related to delinquency, operate sequentially to mediate the relationship between parent-child acculturation discrepancy and adolescent delinquency in an understudied population of adolescents in Chinese immigrant families. Within each family, the mediating pathway is tested separately for two groups of parent-adolescent dyads: those that are more discrepant in their acculturation levels, and those that are less discrepant.
---
Parent-Child Acculturation Discrepancy as a Risk Factor for Adolescent Delinquency
Acculturation is a bi-dimensional construct, consisting of orientations toward two cultures, heritage and mainstream, which are independent of each other (Ryder et al. 2000). Children of immigrants tend to be more acculturated to the mainstream culture, while immigrant parents tend to be more oriented toward their heritage culture (Portes and Rumbaut 1996). Although the alternate scenario occurs with less frequency, some immigrant parents are more acculturated to the mainstream culture, while their children are more oriented toward the parents' heritage culture (e.g., Lau et al. 2005). Regardless of direction, however, discrepancies in family members' acculturation levels have been linked to externalizing behaviors in children, such as substance use in Latino youth (Unger et al. 2009), conduct problems in Mexican youth (Lau et al. 2005), and violence in Asian American youth (Le and Stockdale 2008). This may be because, as long as parents and children hold discrepant beliefs, values and behaviors, family functioning is likely to be disrupted. Birman (2006) found that parent-child acculturation discrepancy leads to family disagreement regardless of the direction of discrepancy. Therefore, although the current study controls for the direction of discrepancy, any form of acculturation discrepancy is considered to have a similar effect on adolescent adjustment.
One limitation of previous studies on this topic is their tendency to rely on concurrent data and to examine only direct correlational relationships between acculturation discrepancy and adolescent delinquency. By using longitudinal data, the current study takes into account the temporal ordering of variables to test the long-term effect of parent-child acculturation discrepancy on youth delinquency and to explore in greater depth the underlying mechanisms of this relationship.
---
Parental Knowledge and Deviant Peers as Potential Mediators
Parent-child acculturation discrepancy in immigrant families disrupts family functioning by increasing the incidence of miscommunication and misunderstanding (Hwang 2006). Theories on communication have highlighted the disruptive effect of not sharing beliefs, values and behaviors; people with divergent points of view can experience difficulty gaining information from each other (Berger and Calabrese 1975). Similarly, parents and children who hold culturally discrepant beliefs, values and behaviors may be discouraged from communicating and interacting effectively. Therefore, adolescents may come to feel that their parents do not know or understand their daily activities, whereabouts and companions.
The extant literature does not directly examine the link between parent-child acculturation discrepancy and perceived parental knowledge. However, previous research does provide some support for such a link. For example, using generational status as a proximal measure of acculturation, Tasopoulos-Chan et al. (2009) found that second generation Chinese American youth more frequently avoided discussing their activities with their parents than did first generation Chinese American youth. In a case study of Chinese immigrant families, Qin (2006) found that both parents and children report that parents do not know about their children's friends and school activities and children do not tell their parents about their experiences due to the fact that parents and children adhere to the heritage and mainstream cultures to different degrees. Weaver and Kim's study (2008) on Chinese American families, which used parent and adolescent reports of parental knowledge as one of several indicators of supportive parenting, suggested that a high level of parent-child acculturation discrepancy may be related to less parental knowledge about children's whereabouts, companions, and bedtime. Therefore, it is possible that in families with a high level of acculturation discrepancy between parents and children, adolescents perceive a lack of parental knowledge. In addition, within a two-parent immigrant family, the child may perceive that the parent who is more culturally discrepant knows less about the child's activities, whereas the other parent, whose acculturation level more closely matches that of the child, knows more.
Parental knowledge has been consistently connected to fewer adolescent problem behaviors because such knowledge reduces the likelihood that the child will affiliate with deviant peers (for a review, see Crouter and Head 2002). Although this link between parental knowledge and adolescent delinquency has been demonstrated in the literature, few studies on immigrant families have examined the factors that set this process in motion. Parent-child acculturation discrepancy may be an ongoing obstacle for immigrant parents when it comes to obtaining knowledge about their children, which in turn places adolescents at risk for affiliating with deviant peers and engaging in delinquent behaviors.
---
Within-Family Variations on the Hypothesized Model
Studies on parent-child acculturation discrepancy usually sample only one parent within a family, even though two-parent families are the most common family form in the immigrant population (Hernandez 2004). Examining the effect of parent-child acculturation discrepancy without considering the family context may yield inconclusive results, as the dynamics in each of the parent-child dyads within a family are interdependent (Costigan 2010; Minuchin 1985). For example, an acculturation discrepancy with one parent may not influence family functioning if there is a great deal of tension between the child and the other parent. Within a family, there are likely to be differences between parents in terms of how similar their acculturation level is to that of their child. In fact, Costigan and Dokis (2006b) found that father- and mother-child acculturation discrepancy differed significantly from each other in both Chinese and American orientations. Thus, the two parent-child dyads within a family can be categorized as the dyad with a greater acculturation discrepancy versus the dyad with a smaller acculturation discrepancy. A contrast effect is likely to take place: acculturation discrepancy in the less discrepant parent-child dyad becomes less important, whereas acculturation discrepancy in the more discrepant dyad becomes more problematic. Indeed, literature on social judgment suggests that one's evaluation of a target is based on its relative characteristics-that is, in comparison to the reference, whatever the reference might be (Mussweiler 2003). Therefore, the link between parent-child acculturation discrepancy and adolescents' perceptions of lack of parental knowledge may be stronger among more discrepant parent-child dyads than it is among less discrepant dyads.
---
Control Variables
Several control variables are theoretically related to the main study variables of parent-child acculturation discrepancy, perceived parental knowledge, adolescents' contact with deviant peers and delinquency. First, the current study controls for family income and parental education level, as the risks posed by parent-child acculturation discrepancy may be especially strong in families in which parents have fewer resources (Portes and Rumbaut 1996). Second, parent gender is controlled, as some parent and child characteristics (e.g., maternal working hours and children's temperament) are more consistently related to paternal knowledge than they are to maternal knowledge (Crouter et al. 1999). Third, empirical studies have found that second-or later-generation adolescents engage in more delinquent behaviors than their first-generation counterparts (Choi and Lahey 2006), and that boys engage in more delinquent behaviors than do girls (Moffitt et al. 2001). In addition, delinquent behaviors tend to increase from early to middle adolescence (Moffitt 1993). Therefore, the present study also includes adolescents' generational status, sex and age as control variables. We also control for the direction of parent-child acculturation discrepancy and whether the more/less discrepant designation remains the same across waves.
---
Present Study
The present study is part of a longitudinal project on Chinese immigrant families. Data were collected first when children in these families were in their early adolescent years (middle school), and again when they were in their middle adolescent years (high school). The current study has two aims. First, we examine the proposed mediating pathways separately among the more and less discrepant parent-adolescent dyads. We hypothesize that parent-child acculturation discrepancy will be related to adolescents perceiving that their parents know less about their daily experiences. The perception of less parental knowledge will be associated with adolescents affiliating with more deviant peers, which in turn will be related to adolescents engaging in more delinquent behaviors. Second, we compare model paths between more and less discrepant dyads. We hypothesize that model paths may be stronger for more discrepant dyads than they are for less discrepant dyads.
The conceptual model to be tested is shown in Fig. 1, which depicts both concurrent and longitudinal paths between model constructs. Concurrent relationships from parent-child acculturation discrepancies to parental knowledge to adolescent delinquency are tested among all Wave 1 variables as well as among all Wave 2 variables (a paths). Data on deviant peers were collected only at Wave 2, and thus are tested as a Wave 2 construct only. Auto-regressive influences are controlled through paths of the same constructs across waves (b paths). In addition, cross-lagged paths are specified for distinct constructs from Wave 1 to Wave 2 (c paths). Alternative cross-lagged paths (d paths) are also specified to test for a potential alternative causal direction of the proposed relationships in the model.
---
Method Participants
Participants were drawn from a two-wave longitudinal study conducted in Northern California. Immigrant parents in the current study hail from mainland China, Hong Kong and Taiwan. As the study targets both parents in a family, all families have two foreign-born parents who are married to one another, both of whom participated in the study. The current sample consists of 201 families in the first wave and 183 in the second wave. Adolescents were between 12 and 15 years of age (M = 13.0, SD = 0.71) at Wave 1, and 16-19 years of age (M = 17.0, SD = 0.72) at Wave 2. Females accounted for 61.2% of the adolescent sample at Wave 1 and 60.1% at Wave 2. Median family income was in the range of $30,001-$45,000 at Wave 1 and $45,001-$60,000 at Wave 2. Median education level was high school graduate for both fathers and mothers across waves.
---
Procedure
At Wave 1, participants were recruited from seven middle schools in major metropolitan areas of Northern California. With the aid of school administrators, Chinese American students were identified, and all eligible families were sent a letter describing the research project. Participants received a packet of questionnaires for the mother, father, and target child in the household. Participants were instructed to complete the questionnaires alone and not to discuss answers with friends and/or family members. They were also instructed to seal their questionnaires in the provided envelopes immediately following completion of their responses. Within approximately 2-3 weeks after sending the questionnaire packet, research assistants visited each school to collect the completed questionnaires during the students' lunch periods. Of the 47% of families who agreed to participate, 76% returned surveys. Approximately 79% of families participating at Wave 1 completed questionnaires at Wave 2. At each wave, the entire family received nominal compensation ($30 at Wave 1 and $50 at Wave 2) for their participation. Questionnaires were prepared in English and Chinese. The questionnaires were first translated to Chinese and then back-translated to English. Any inconsistencies with the original English version of the scale were resolved by bilingual/bicultural research assistants with careful consideration of culturally appropriate meaning of items.
Attrition analyses were conducted to compare whether demographic variables differed between families that participated at only one wave and those that participated at both waves. Only adolescent sex was marginally significantly related to attrition: boys were more likely to have dropped out than girls (χ 2 (1) = 3.86, p = .051).
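The attrition check above is a 2×2 chi-square test (adolescent sex by retention status). As an illustrative, standard-library-only sketch of such a test (this is not the authors' code, and any counts passed to it would be hypothetical):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    and upper-tail p value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 df, P(X > x) = erfc(sqrt(x / 2)), since X is a squared standard normal
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p
```

Given the actual retained/dropped counts for boys and girls, a call of the form `chi2_2x2(boys_retained, boys_dropped, girls_retained, girls_dropped)` would reproduce a test of the kind reported above (χ²(1) = 3.86, p = .051).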
---
Measures
Acculturation-The Vancouver Index of Acculturation follows the bi-dimensional model of acculturation and was developed for use with Chinese Americans (Ryder et al. 2000). Using a scale ranging from (1) "strongly disagree" to (5) "strongly agree," mothers, fathers, and adolescents responded to 10 questions about their American orientation and 10 questions about their Chinese orientation. Questions asked about a range of generic behaviors without listing specific traditions or attitudes (e.g., "I often follow Chinese cultural traditions"). The American orientation items were the same as the Chinese orientation items, except that the word "Chinese" was changed to "American." Only those items that conformed to the common factor structure across informants and waves were used (Kim et al. 2009). Across informants and waves, the internal consistency was high for both orientations (α = .76-.82).
Parental Knowledge-Parental knowledge was assessed through a measure adapted from the Iowa Youth and Families Project (Ge et al. 1996). Using a scale ranging from (1) "never" to (5) "always," adolescents rated three items on parents' knowledge of adolescents' daily activities (e.g., "During the day, does your parent know where you are and what you are doing?"). Across waves, the internal consistency was acceptable (α = .62-.74).
Deviant Peers-Adolescents reported on their association with deviant peers at Wave 2 only, using an abridged 7-item version of a peer deviance measure previously used with Asian American adolescents (Le and Stockdale 2005). Adolescents rated the proportion of their close friends who had exhibited problem behaviors (e.g., gone joyriding) during the past 6 months using a scale ranging from (1) "almost none" to (5) "almost all." The internal consistency was high (α = .83).
Delinquent Behaviors-Delinquent behaviors were assessed through measures adapted from the "rule-breaking behaviors" subscale of the Child Behavior Checklist (Achenbach 2001). One additional item, "is part of a gang," was added. Using a scale ranging from (0) "not true" to (2) "often true or very true," adolescents rated their own problem behaviors during the past 6 months. Two items ("feel guilty after doing something I shouldn't do" and "would rather be with older kids than kids my own age") were dropped from factor analysis due to low factor loadings. The internal consistency ranged from .57 to .60 across waves. Given the low levels of delinquent behaviors reported, each delinquent behavior was dichotomized, such that a score of 0 reflected no delinquent behavior and a score of 1 indicated delinquent behavior, whether occasional or frequent.
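The dichotomization rule described above can be stated compactly in code; the following is an illustrative sketch (the function name is ours, not the authors'):

```python
def dichotomize(item_scores):
    """Recode CBCL-style ratings (0 = not true, 1 = somewhat true,
    2 = often/very true) so any endorsement counts as presence (1)."""
    return [1 if score > 0 else 0 for score in item_scores]
```

For example, `dichotomize([0, 1, 2, 0])` yields `[0, 1, 1, 0]`.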
Control Variables-Fathers and mothers reported on their family income before taxes and highest level of education attained. Family income was assessed using a scale ranging from (1) "below $15,000" to (12) "$165,001 or more." The highest level of education attained by parents was assessed using a scale ranging from (1) "no formal schooling" to (9) "finished graduate degree (e.g., Master's degree)." Adolescents also reported their age, sex, whether they were foreign-or US-born and whether their parents were married to one another.
---
Conceptualizing More/Less Discrepant Parent-Child Dyads
Acculturation scores of adolescents and parents were first standardized. The parent-child discrepancy score was the absolute value of the difference obtained by subtracting the standardized parent score from the standardized adolescent score. The discrepancy scores of the two parent-adolescent dyads in the same family were then compared with each other. The dyad with the higher discrepancy score was assigned to the more discrepant group, whereas the dyad with the lower discrepancy score was assigned to the less discrepant group. These designations were made separately for each wave and separately for Chinese and American orientations. For the entire sample, slightly more father-adolescent dyads (50.8-54.2%) than mother-adolescent dyads (49.2-45.8%) were placed in the more discrepant group across all the designations. This issue was addressed by controlling for parent gender as a covariate in the following analyses.
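The group-assignment procedure above can be sketched as follows. This is an illustrative Python sketch with hypothetical scores for two families; the authors' actual computation may differ in detail:

```python
import numpy as np

families   = np.array([1, 1, 2, 2])          # two parent-child dyads per family
parent_acc = np.array([2.1, 3.4, 4.0, 3.8])  # parent acculturation scores
child_acc  = np.array([4.2, 4.2, 3.1, 3.1])  # adolescent acculturation scores

def zscore(x):
    """Standardize scores across the sample."""
    return (x - x.mean()) / x.std()

# Discrepancy = |standardized adolescent score - standardized parent score|
discrepancy = np.abs(zscore(child_acc) - zscore(parent_acc))

# Within each family, the dyad with the larger discrepancy is the
# "more discrepant" dyad; the other dyad is "less discrepant".
group = np.empty(len(families), dtype=object)
for fam in np.unique(families):
    idx = np.where(families == fam)[0]
    more = idx[np.argmax(discrepancy[idx])]
    group[idx] = "less"
    group[more] = "more"
```

As in the paper, the designation would be repeated separately for each wave and for each cultural orientation.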
---
Results
---
Analyses Plan
Data analyses proceeded in three steps. First, we conducted descriptive and correlational analyses for model constructs and control variables. Second, we tested our first hypothesis on the mediating pathway separately among more and less discrepant groups using structural equation modeling. We examined the hypothesized paths depicted in Fig. 1 and the indirect effects from parent-child acculturation discrepancy to adolescent delinquency. Third, we tested our second hypothesis on the difference between more and less discrepant groups. We conducted invariance tests to compare the strength of the model parameters for more and less discrepant dyads. All the steps were conducted separately for Chinese and American orientations.
---
Descriptive Statistics and Correlational Analyses Among Model Constructs
Table 1 displays the descriptive statistics for the raw scores from participants' original reports. Tables 2 and 3 display the descriptive statistics and correlations among the study variables for models involving Chinese and American orientations, respectively. Consistent with the hypotheses, concurrent relationships and auto-regressive relationships between model constructs are generally significant. One notable exception is that parent-child acculturation discrepancy is significantly correlated with parental knowledge only among the more discrepant parent-adolescent dyads in American orientation. In addition, only two cross-lagged relationships are significant among the more discrepant parent-adolescent dyads in American orientation: a high level of parent-child acculturation discrepancy at Wave 1 is related to compromised parental knowledge at Wave 2, and a high level of parental knowledge at Wave 1 is significantly related to less contact with deviant peers at Wave 2. A potential alternative cross-lagged relationship emerged (Path d3 in Fig. 1), as adolescent delinquency at Wave 1 is significantly related to deviant peers at Wave 2 for both Chinese and American orientations. This is the only alternative path included in the analyses of the hypothesized models described below.
---
Analyses of Hypothesized Models
Structural Equation Modeling (SEM) was used to examine the hypothesized model using Mplus 6.11 (Muthen and Muthen 2011). Both concurrent and longitudinal links, as well as direct and indirect effects among the model constructs, were tested simultaneously. Mplus uses the full information maximum likelihood (FIML) estimation method to handle missing data, so that all the available data can be used to estimate model parameters (Muthen and Muthen 2011).
Four separate models were tested, separately for more and less discrepant parent-adolescent dyads, for both Chinese and American orientations. For all models, the endogenous variable was adolescent delinquent behaviors, and the mediating variables were parental knowledge and deviant peers. Adolescents' age, sex, and place of birth, as well as family income, parental educational level, the direction of the parent-child acculturation discrepancy, and whether the assignment to the more or less discrepant group switched from Waves 1 to 2, were included in all models as covariates.
The model fits are displayed in the last set of rows in Table 4. The four models showed a fair to good fit to the data. Each model explained 9.2-15.9% of the variance in Wave 1 adolescent delinquency, and 43.5-47.7% of the variance in Wave 2 adolescent delinquency.
The coefficients and confidence intervals for our hypothesized paths are also shown in the first set of rows in Table 4. All the hypothesized concurrent relationships among parent-child acculturation discrepancies, perceived parental knowledge, adolescents' contact with deviant peers and adolescent delinquency (a paths) are significant in the models for more discrepant dyads in American orientation. In contrast, parent-child acculturation discrepancy is not significantly related to less parental knowledge (Paths a1 and a3) in the models for less discrepant dyads in Chinese or American orientation, nor for more discrepant dyads in Chinese orientation. Auto-regressive influences are generally significant for parent-child acculturation discrepancy and parental knowledge (Paths b1 and b2). However, with the exception of Model 4, the auto-regressive influence of adolescent delinquency (Path b3) is not significant. In addition, with the exception of the significant relationship between W1 delinquency and W2 deviant peer association (Path d3 in all four models), none of the other cross-lagged paths is significant.
Indirect effects are shown in the second set of rows in Table 4. Concerning our first hypothesis, on mediating effects, only the models for more discrepant parent-adolescent dyads in American orientation yielded significant indirect effects from parent-child acculturation discrepancy to adolescent delinquency. Concurrently, the effect of parent-adolescent acculturation discrepancy on adolescent delinquency was mediated by parental knowledge at Wave 1 (Pathway 1), and by both parental knowledge and contact with deviant peers at Wave 2 (Pathway 2). Longitudinally, the indirect effect of parent-adolescent acculturation discrepancy at Wave 1 on adolescent delinquency at Wave 2 was significant via two pathways. The first pathway was via parental knowledge at both waves and contact with deviant peers at Wave 2 (Pathway 3). The second was via parental knowledge at Wave 1, adolescent delinquency at Wave 1, and contact with deviant peers at Wave 2 (Pathway 4).
---
Comparing Models for More and Less Discrepant Parent-Adolescent Dyads
Concerning our second hypothesis, on the difference between more and less discrepant parent-child dyads, invariance tests were used to determine whether the model paths (Paths a, b, c and d3) were significantly different between the two groups; these were conducted separately for American and Chinese orientations. For each orientation, data for more and less discrepant dyads were modeled within the same covariance matrix to account for within-family dependence (Benner and Kim 2009). A model was first fitted allowing all structural paths to be freely estimated between more and less discrepant dyads. Individual paths of the structural model were then constrained, one at a time, to determine if they were significantly different across groups. The Chi-square test was used to determine whether a more constrained model fitted the data significantly worse than a less constrained one.
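Because each invariance test constrains a single path across groups, the resulting chi-square difference statistic has 1 degree of freedom. The p-value computation can be sketched generically; this is an illustrative calculation, not the authors' Mplus output:

```python
from math import erfc, sqrt

def chi2_diff_p(delta_chisq):
    """p-value for a chi-square difference test with 1 degree of freedom
    (constrained model chi-square minus freely estimated model chi-square).

    For X ~ chi-square(1), P(X > x) = erfc(sqrt(x / 2)).
    """
    return erfc(sqrt(delta_chisq / 2.0))

# Reported difference for Path a1: chi-square(1) = 4.47, which gives p < .05,
# so constraining that path equal across groups significantly worsens fit.
p_a1 = chi2_diff_p(4.47)
```

A small p-value means the constrained model fits significantly worse, i.e. the path differs between the more and less discrepant groups.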
For American orientation only, invariance tests showed that three paths are stronger in the model for more discrepant parent-adolescent dyads than in the model for less discrepant dyads: the path from parent-child acculturation discrepancy to parental knowledge at Wave 1 (Path a1, χ²(1) = 4.47, p < .05), the path from parental knowledge to adolescent delinquency at Wave 1 (Path a2, χ²(1) = 5.68, p < .05), and the path from parental knowledge to contact with deviant peers at Wave 2 (Path a4, χ²(1) = 7.25, p < .01).
---
Discussion
Parent-child acculturation discrepancy has mostly been studied using cross-sectional data from the adolescent and just one parent in the family, usually the mother (Costigan 2010). The current study used longitudinal data to examine parent-child acculturation discrepancy as an ongoing risk factor for adolescent delinquency, and explored possible variations of this effect between more and less discrepant parent-adolescent dyads in terms of how their different acculturation levels might affect functioning within each family. The mediating mechanism of this relationship was examined both concurrently and longitudinally. For more discrepant parent-adolescent dyads in American orientation, the relationship between parent-child acculturation discrepancy and adolescent delinquency is mediated by adolescents' perception of parental knowledge and contact with deviant peers, both concurrently and longitudinally.
In the current study, parent-child discrepancies in American orientation, but not Chinese orientation, are indirectly related to adolescent delinquency. The extant literature has been inconsistent on the question of whether orientations towards the mainstream and heritage cultures influence delinquent behaviors in adolescents from immigrant families. For example, Le and Stockdale (2005) found that Asian American adolescents' endorsement of both orientations was related to their delinquent behaviors. In comparison, Juang and Nguyen (2009) found that adolescents' misconduct (i.e., damaging school property, threatening a teacher or hurting a classmate) was not significantly related to orientations towards either American or Chinese culture, but instead to specific cultural values (i.e., autonomy expectations). This finding suggests that the effects of acculturation-related factors on adolescent adjustment may vary according to the specific area being examined. It is possible that only a parent-child discrepancy in American orientation affects adolescent delinquency through the mediating pathway of parental knowledge and contact with deviant peers, whereas a discrepancy in Chinese orientation affects adolescent adjustment through other mediating mechanisms. This possibility seems especially likely considering that the construct measured in the current study-namely, parental knowledge about children's daily experiences-is more likely to be associated with the mainstream culture than with the heritage culture. Future studies are needed to explore whether and how parent-child discrepancy in Chinese orientation may be related to adolescent delinquency in Chinese immigrant families.
The existing literature considers lack of parental knowledge, especially adolescents' perceptions that their parents lack knowledge, to be a risk factor for adolescent delinquency (Crouter and Head 2002). The current study adds to this literature by identifying parent-child acculturation discrepancy as one possible origin of this particular risk factor in immigrant families. Further, this link between parent-child acculturation discrepancy and parental knowledge may take different forms, depending on the various dynamics operating within a given family. In our study, we compared the more and less discrepant parent-adolescent dyads within each family. Generally, the parent who is more discrepant from the child in orientation towards the mainstream culture presents more of a risk factor than does the less discrepant parent. Only among dyads in the more discrepant group is parent-child acculturation discrepancy related to deterioration in adolescents' perceptions of parental knowledge, which in turn is linked to more adolescent delinquency. Studies have found that parental knowledge comes from different sources, such as parents' active surveillance and adolescents' voluntary disclosures (Stattin and Kerr 2000). Studies measuring perceived parental knowledge (Soenens et al. 2006) also support this notion. It is possible that both processes, surveillance and disclosure, are compromised for the more discrepant parent-child dyad. In comparison, the less discrepant parent may assume more responsibility for actively tracking the child's activities, because he or she relates to the child better. For their part, adolescents may be more willing to share their daily experiences with their less discrepant parent, as they may feel that this parent understands them.
An interesting finding in the current study is that adolescent delinquency in early adolescence is consistently related to contact with deviant peers in middle adolescence, but not as consistently to delinquency in middle adolescence. In fact, contact with deviant peers during middle adolescence seems to bridge delinquency in early and middle adolescence. This result suggests that it may be ideal to time an intervention for reducing delinquency before early adolescence, when it may be most effective at reducing the long-term consequences of problem behaviors. Early onset of delinquent behaviors is a sign of a life-course-persistent pattern, whereas adolescence-limited delinquent behaviors are more likely to exist only in middle adolescence (Moffitt 1993). As the life-course-persistent pattern of delinquency clearly poses more of a developmental risk, it is important to develop early intervention programs aimed at preventing this persistent pattern from developing.
---
Implications
The current study demonstrates that acculturation discrepancy in parent-child dyads is implicated in child maladjustment. Moreover, it suggests that the parent who is more discrepant poses the greater risk to child outcomes. Intervention programs usually target mothers, or whichever parent in a family signs up for the program (Ying 1999). However, this may not be a good strategy if the participating parent happens to be the less discrepant parent in the family. Rather, it may be more fruitful for future interventions to use a baseline measure to identify and target the parent whose acculturation level is more discrepant from that of the child.
The current study also identifies parental knowledge as a proximal mediator of the relationship between parent-child discrepancy in American orientation and adolescent delinquency. A lack of shared values, beliefs and activities may create misunderstanding and precipitate disagreements among family members. Intervention programs need to facilitate effective communication by providing approaches such as active monitoring and encouraging adolescents' disclosure.
---
Limitations
There are some limitations of the current study. First, families in which only one parent participated, including all single-parent families in the project, were not included in the sample. Thus, our findings may not be applicable to those families. In a similar vein, given the low participation rate, future studies with different samples are needed to examine whether the current findings can be replicated. Second, there are few significant cross-lagged relationships between study variables. This lack of significance may be attributed to the gap of 4 years that occurred between data collection waves. Third, although the direction of parent-child acculturation discrepancy was included as a covariate, the current study could not compare model parameters between families with different discrepancy directions. Future studies with larger sample sizes are needed to examine whether the direction of the parent-child acculturation discrepancy has an effect on how it impacts child adjustment. Finally, the current study assumes that a high level of parental knowledge and a low level of adolescent delinquency are adaptive. It is possible, however, that an extremely high level of parental knowledge indicates an overly controlling parenting style, and an extremely low level of adolescent delinquency indicates poor peer relationships, both of which are indicators of adolescent maladjustment. Future studies are needed to examine how various levels of parental knowledge and adolescent delinquency are related to adolescents' long-term developmental outcomes.
---
Conclusion
The current study explored the possible mediating mechanism of the relationship between parent-child acculturation discrepancy and adolescent delinquency, and compared the mediating pathways between more and less discrepant parent-adolescent dyads in Chinese immigrant families. For parent-adolescent dyads more discrepant in American orientation, acculturation discrepancy in early adolescence is an ongoing risk factor for adolescents' engagement in delinquent behaviors, in both early and middle adolescence. These results suggest that future intervention programs need to include the parent whose acculturation level is more discrepant from that of the child. Facilitating better communication between parents and children, thereby increasing parental knowledge during early adolescence, may be the most promising strategy for interventions aiming to reduce adolescents' affiliation with deviant peers and subsequent engagement in delinquent behaviors.
Fig. 1 Conceptual longitudinal model linking parent-child acculturation discrepancy, parental knowledge, deviant peers, and adolescent delinquency in Chinese immigrant families. a paths: concurrent relationships between model constructs within Wave 1 or Wave 2; b paths: auto-regressive relationships between the same constructs across Wave 1 and Wave 2; c paths: cross-lagged relationships between distinct constructs from Wave 1 to Wave 2; d paths: alternative cross-lagged relationships between distinct constructs from Wave 1 to Wave 2
Table 1 Descriptive statistics for raw scores of study variables
Table 2 Descriptive statistics and correlations among study variables in Chinese orientation models
Table 3 Descriptive statistics and correlations among study variables in American orientation models
---
While HIV pre-exposure prophylaxis (PrEP) is highly effective, it has arguably disrupted norms of 'safe sex' that for many years were synonymous with condom use. This qualitative study explored the culture of PrEP adoption and evolving concepts of 'safe sex' in Sydney, Australia, during a period of rapidly escalating access from 2015-2018, drawing on interviews with sexually active gay men (n = 31) and interviews and focus groups with key stakeholders (n = 10). Data were analysed thematically. Our results explored the decreasing centrality of condoms in risk reduction and new patterns of sexual negotiation. With regards to stigma, we found that there was arguably more stigma related to not taking PrEP than to taking PrEP in this sample. We also found that participants remained highly engaged with promoting the wellbeing of their communities through activities as seemingly disparate as regular STI testing, promotion of PrEP in their social circles, and contribution to research. This study has important implications for health promotion. It demonstrates how constructing PrEP as a rigid new standard to which gay men 'should' adhere can alienate some men and potentially create community divisions. Instead, we recommend promoting choice from a range of HIV prevention options that have both high efficacy and high acceptability. | Introduction
HIV pre-exposure prophylaxis (PrEP) has transformed the landscape of HIV prevention. It forms part of a series of behavioural and biomedical interventions of varying levels of efficacy that have disrupted the normative power of condoms in HIV prevention discourse from the 1990s onward. Other interventions in this series have included negotiated safety [1], postexposure prophylaxis [2], strategic positioning [3], serosorting [4] and treatment-as-prevention [5]. PrEP is highly effective at preventing HIV [6], and has the advantages of not being coitally dependent and providing receptive sexual partners with an intervention they can use without requiring the insertive partner's cooperation [7].
Despite these advantages, the use of PrEP in populations of gay, bisexual and other men who have sex with men (GBMSM) was initially problematised by some prominent figures in the United States gay community when first approved in the US. Michael Weinstein, president of the AIDS Healthcare foundation, dismissed PrEP as a 'party drug'; Larry Kramer, founding member of both the Gay Men's Health Crisis and activist organisation ACT-UP, described taking a pill to prevent HIV rather than using a condom as 'cowardly'. Freelancer David Duran wrote disapprovingly that PrEP gave 'gay men who prefer to engage in unsafe practices' a way to 'bareback' without having to worry about HIV in a piece memorably titled 'Truvada whores?', referencing the brand name of the medication used for PrEP [8]. (Ironically, PrEP advocates then adopted 'Truvada whore' as a cultural meme promoting PrEP use.) The stark community divisions between those advocating for PrEP and those warning that it could do more harm than good signal the cultural significance of condom-protected sex as normative in HIV prevention discourses for GBMSM, despite the raft of other interventions listed above that had to some extent already displaced condoms [1][2][3][4].
'Safe sex' (or 'safe(r) sex') was a concept generated from the very earliest days of the HIV epidemic. The development of safe sex culture-which included, but was not confined to, condom use-focused on articulating and promulgating menus of sex practices that enabled rich expression and enjoyment of sex while precluding HIV transmission between partners. There are examples of 'safe sex' materials developed even prior to there being certainty that a sexually transmissible virus was the cause of AIDS [9]. Taking collective responsibility for sexual health and the avoidance of HIV transmission among gay men was described by Weeks as a concrete exercise in sexual citizenship, and he suggested that men who failed to do this risked moral pariah status [10].
For many years in Australia, the term 'safe sex' was synonymous with condom use, even though other forms of safe sex were articulated and practiced [1]. Maintaining high prevalence of condom use was deemed critical to controlling HIV incidence by community-based organisations and public health experts alike [11][12][13].
By 2010, however, there was emerging evidence of the effectiveness of new antiretroviral strategies to reduce or prevent HIV transmission to sexual partners, either by suppressing the viral loads of people living with HIV, or through the use of antiretroviral drugs as prophylaxis by HIV negative people-PrEP [14]. One of the normative challenges that PrEP brought to HIV prevention discourse was that it required individuals to acknowledge a risk (condomless or 'bareback' sex) that gay men had been told to avoid for three decades, outside of relationship sex [15]. Although the efficacy of treatment-as-prevention also allowed for consideration of 'bareback' sex, it was premised upon the use of antiretroviral drugs in people living with HIV. Suppression of the infective agent is a time-honoured strategy in infectious disease control and is less contentious in that context, though in practice some HIV negative men remain nervous despite the strong evidence of effectiveness [16]. With PrEP, the focus shifted to the routine use of antiretroviral drugs in HIV negative people potentially for protracted periods of time, an approach analogous to malaria prevention in travellers but on a far greater scale. This shift was described by Thomann as 'the pharmaceuticalisation of the responsible sexual subject' and is connected to 'end of AIDS' discourses that posit HIV prevention as a medical and technological problem [15].
Recent research has also shown both that taking PrEP is associated with lowered anxiety in gay and bisexual men who would otherwise be at risk of HIV [17][18][19], and that clinicians will prescribe PrEP to gay men where there is no clear clinical risk of HIV acquisition, speculating that there might be undisclosed risk factors [20].
To date there has been considerable qualitative research on the willingness of GBMSM to use PrEP, its acceptability [21][22][23][24][25][26], and community perceptions of its value in HIV prevention [27]. Research in Canada and the U.S. has also explored the impact of PrEP with respect to sexual health, communication and behaviour and social and community issues among gay and bisexual men [19,28]. However, there has been little Australian research that explores the meaning of PrEP and how men in gay male sex cultures see it shaping evolving norms of 'safe sex'.
This study investigated perceptions of PrEP and conceptualisations of 'safe sex' during the period of incrementally increasing access in Australia (2015-2018), drawing predominantly upon perspectives of GBMSM, and also on stakeholders comprising HIV community staff and healthcare providers. At the beginning of the study, PrEP was only available through very limited trials and through personal importation. Access changed dramatically in March 2016, when large-scale implementation studies commenced, with more than 10,000 GBMSM enrolled in New South Wales (NSW) [29]. In April 2018, subsidised access under Australia's Pharmaceutical Benefits Scheme made PrEP available nationwide at a standard, subsidised price [30]. Thus, this study spanned a period of rapid change in PrEP access and uptake, with data collection beginning in October 2015 and continuing until December 2018. The study aimed to explore how PrEP was impacting on sex cultures-how GBMSM saw PrEP as affecting their sex practices, as well as perspectives on how PrEP affected existing cultural norms for HIV prevention.
---
Methods
The Sydney In-depth PrEP study (SIn-PrEP) was a qualitative study that explored evolving norms of 'safe sex' during the introduction of PrEP in Australia.
SIn-PrEP drew on participatory action research methods with respect to data collection, analysis and communication of results [31]. Prior to data collection, a reference group was established to guide the research. This comprised representatives from the local LGBTIQ, HIV positive and transgender community organisations; and two researchers with extensive experience in research on gay male sexuality. This group met regularly in the early period of data collection to discuss initial findings and developments in PrEP access. As data collection progressed, the first author met periodically with representatives of the local community organisation ACON (formerly known as the AIDS Council of New South Wales), to discuss how findings could inform health promotion campaigns under development, and participated in information sessions with the community organisation to discuss implications. Study findings were reported to and discussed with community organisations prior to presentation or publication so that findings could inform development of health promotion campaigns.
---
Recruitment
Data collection commenced in October 2015 and ceased in December 2018. Study participants were drawn from three distinct populations-sexually active GBMSM, clinicians involved in PrEP prescribing, staff working in HIV and LGBTIQ+ community organisations-each with different recruitment strategies. Sexually active GBMSM community participants (n = 31), (hereafter 'gay community participants', as these participants identified as gay) were recruited primarily through the social media channel of a local community-based LGBTQ+ organisation, ACON, supplemented by fliers distributed at gay community organisations, events, venues and word of mouth. This group included HIV negative men taking PrEP, HIV negative men who chose not to take PrEP and men living with HIV. Both cis and trans identified gay men were eligible for the study, and participants were recruited from Sydney, NSW. In 2016 and 2017, there was further targeted recruitment through Kirby Institute research data bases purposively inviting transgender gay men, and gay men on PrEP access studies who reported they had ceased taking PrEP. Only people who had given permission to be contacted for research participation opportunities were contacted using this method.
Clinicians from public sexual health clinics and general practice with high caseloads of GBMSM (n = 6) were purposively selected. Community-based staff (n = 4) were recruited through invitations to major LGBTIQ+ organisations which passed the invitations onto key personnel who then decided whether to participate.
Data collection. Data were collected in the form of in-depth semi-structured interviews for clinicians (n = 6) and gay community participants (n = 31), and a focus group of community-based professionals (n = 4). Interviews were audio recorded and transcribed verbatim by a professional transcriber. Interviews lasted approximately 60 minutes, while the focus group ran for 90 minutes. Interviews were usually held face-to-face, although three gay community participants were interviewed by phone. Participants in the gay community group chose their own pseudonyms. Health care providers were assigned numbers (1-6), as were focus group participants. Gay community participants were interviewed individually as they were discussing very personal issues. Data were collected from community-based professionals in a focus group, as this allowed for a rich discussion where participants built on each other's views and compared experiences, without privacy risks, as they were not discussing their own private behaviour.
All data were collected by the first author, who is a queer-identified woman with extensive networks in the LGBTIQ+ and HIV communities.
Domains of interviews and focus groups. Gay community participants were asked questions about how and why they saw PrEP as relevant to their sexual lives, whether or how it was changing their sexual lives, and how they rated the importance of sex in their lives. HIV negative men were also asked about the importance of remaining HIV negative, in addition to other questions about access to PrEP and adherence for those taking PrEP. Health care providers and community-based professionals were asked about emerging issues in the provision of PrEP, their views on optimal implementation and the challenges of health communication. Community-based professionals in focus groups were asked about the impacts of PrEP on 'safe sex' health promotion, complexities of access and observed changes in community norms.
---
Research ethics
This study was approved by the University of New South Wales Human Research Ethics Committee (approval number HC15305) and the ACON Research Ethics Review Committee (RERC 2015/08).
All participants who participated in face-to-face interviews or focus groups provided written informed consent. Participants interviewed by telephone provided formal verbal informed consent. Participants were not remunerated for their participation.
---
Analysis
Transcripts from interviews and the focus group were reviewed and then coded using NVIVO (v11-12) software. Coding was initially inductive and comprised descriptive (e.g. 'condom use-kills erection') and conceptual codes (e.g. 'citizenship'). Codes were reviewed and mapped in relation to each other, and developed into key themes by the first author, in discussion with reference group members, study investigators and stakeholders, and at formal presentations of preliminary findings. Descriptive themes (e.g. 'STI testing and communication' and 'advocating/explaining PrEP through social media') were further compared and analysed, leading to higher order concepts (e.g. 'Responsibility and care') drawing on Braun and Clarke's six step process of reflexive thematic analysis [32,33].
---
Results
A total of 24 HIV negative gay men currently or recently on PrEP, seven gay men who had never taken PrEP (two HIV positive, five HIV negative), and six healthcare providers took part in semi-structured, in-depth interviews. One focus group was conducted with four community HIV sector staff. Two of the HIV negative men currently or recently taking PrEP were transgender and 22 were cisgender. Gay community participants were aged between 18-53 years (median 38 years; community-based staff and healthcare providers were not asked their ages).
All gay community participants described themselves as sexually active. Many had primary relationship partners or husbands, but also had other regular and/or casual partners. Among those with primary relationship partners, relationship agreements included complete openness, 'don't ask don't tell' agreements, monogamy with exceptions (such as other partners allowed when travelling), playing together (having sex with other partners together) and monogamy. This article draws predominantly on the interview data with gay community participants.
Three major cross-cutting themes were identified. 'Changing norms and clashing symbols' encompassed the decreasing centrality of condoms in risk reduction and participants' responses to that, and has a sub-theme on negotiation, where the emergent norms are discussed in the specific context of sexual negotiation. 'Stigma' encompassed both stigma related to HIV and stigma related to not taking PrEP. 'Responsibility and care' comprised participants' accounts of activities as seemingly disparate as regular STI testing, promotion of PrEP and/or other risk reduction in their social circles, and contribution to research, which were nevertheless linked conceptually in participants' discourse to 'giving back to' or promoting the wellbeing of their communities.
---
Changing norms and clashing symbols
Participants across all three groups strongly endorsed the idea that established norms of 'safe sex' had changed, and that condom use was no longer central. Although most of the men in the gay community participant group had been having at least some condomless sex before PrEP, nearly all these men, whether on PrEP or not, reported that their own sexual practice had been affected directly or indirectly by increasing PrEP access. This impact was in the form of reduced condom use in casual sex. Among the sexually active men not on PrEP, there was a minority view that PrEP could not and arguably should not replace condom use, as they deemed condom use to be central to STI control. Many of the men on PrEP or those living with HIV, however, deemed curable STIs a minor annoyance only, as can be seen in the following quote.
STIs are not as of concern for me, you know. For the sake of the argument, you go in and get a jab. You go and take a couple of pills, you know, and, and we're fine. HIV's the big one that we don't have a cure for. Teddie, 32, on PrEP

For many participants, a shift away from a condom-based norm while remaining protected from HIV brought a new sense of freedom, regardless of the lack of protection from other STIs.
I feel like shackles have been loosened a little. Chukki, 43, on PrEP

This freedom was connected to the physical pleasures of condomless sex, as indicated by Mannie, a 35 year old gay community participant who expressed this as "I don't like being fucked by a plastic bag".
Some men however perceived that there were socially valuable aspects of 'condom culture' which they feared were being lost. For these men, condom use had a symbolic value as a marker of caring either specifically for a sex partner or more broadly for 'community' by adopting tangible sexual practices that prevented the transmission of HIV. For men who perceived that condom use could indicate care, there was some concern that PrEP could symbolically erode this.
If someone only wants to fuck you without a condom, then are they actually thinking about the bigger consequences of the act? Steve, 53, on PrEP

Other men however used advocacy for PrEP in their virtual and real-life social circles as a way of protecting and promoting community values.

I made like some Facebook post about it . . . My words were: it's a way for HIV negative people to be active in fighting HIV. Mark, 24, on PrEP

With regard to how PrEP impacted on the concept of an inclusive community, again there were clashing perspectives. HIV positive participants suggested that PrEP was diminishing what they perceived as a sexual division between HIV negative and positive men.
There's quite a big split between condoms, people that use condoms consistently and people that use PrEP. What's sort of happening I think is that people that are on PrEP are a lot more open to sleeping with people that are positive. Mike, 38, HIV+

There were two facets identified in this: firstly, that taking antiretroviral drugs opened HIV negative men up to understanding social issues related to taking a medication associated with HIV, and secondly, that negative men taking PrEP were less likely to serosort (proactively choose partners known or assumed to be the same serostatus) [4]. One of the HIV positive participants, however, who only had condomless sex, said that he still serosorted.

I will not choose someone that's, that is HIV negative. [Okay] Yeah. [Yeah] I'd only, I only have sex with people that are HIV positive. Ron, 40, gay community participant, HIV+

Notably, not all HIV negative participants, whether on PrEP or not, were accepting of having known HIV positive men as sexual partners, and in particular were troubled by the idea of condomless sex with a known positive partner despite other risk-reduction interventions such as PrEP or the potential partner having an undetectable viral load.

I understand that someone who, has an undetectable viral load is, you know, safe. But, nevertheless, it just kind of plays on your mind. Josh, 45, takes PrEP periodically, such as when travelling.
One HIV negative participant not on PrEP was adamant that he would only have condomless sex with an HIV positive partner if he could see their viral load test results.

Like there's guys I've met on-line who, one of them's positive and he wants to do it without the condom. And I said, "I wanna see your [viral load] blood test [results]." Nick, 57, not on PrEP

While almost all participants were very clear that they understood that an undetectable viral load meant 'safe sex' from the perspective of HIV risk, several said they would expect a positive person with an undetectable viral load to use a condom. Others admitted that they avoided known HIV positive men as sex partners, though recognising that they probably had had unacknowledged HIV positive sex partners.
---
Negotiation
How risk reduction was negotiated for casual sexual encounters was a major issue of debate regarding changing norms. In sexual negotiation, the massive changes caused by the increasingly pervasive role of on-line sex applications (hereafter 'hook up apps') were as much an issue as the changes in HIV risk reduction occasioned by PrEP and treatment-as-prevention, particularly for older men who were veterans of gay bars and sex-on-premises venues. PrEP-taking participants were divided as to whether they would list 'on PrEP' on their profiles, as this set up the presumption of condomless sex-on the one hand, this was seen as increasing the attractiveness of a profile (hence increasing sexual capital), but on the other, it would shut down the potential for negotiation.
I figure that the only people who need to know that are the people who are naked next to me. . . if you wanna have sex with me, I actually want to have some connection with you as a human being. Steve, 53, on PrEP

Hook up apps were also a medium for discussion of PrEP-both for providing information about it to curious others, and also for heated and sometimes polarised debate about the social and community value of PrEP.
Having PrEP listed on a hook-up app was widely seen as something that forestalled negotiation about HIV risk reduction.
If you do have it on, they take that as like, "Oh, he's going to like be into like bare-back. Like no condoms." Calvin, 18, on PrEP

Another participant, who was taking PrEP but had to stop due to unmanageable side effects, noted the difference in both volume and quality of responses he got on hook up apps from when he had 'on PrEP' in his profile and when he subsequently removed it.
The minute you put [PrEP] out there [on your profile] people would get straight to the point with what they wanted to do with you. And like, "Oh, okay. This is kind of cool." And then you'll get a lot more of on-PrEP guys message you as well. . .. I'm like, "Whoa! Okay. No! No! Can I have a conversation with you first? See your face first? That'd be nice." You just don't get that [when it's not on the profile]. Sussman, 30, former PrEP user.
For some men, particularly those who expressed some difficulty with negotiating with sex partners, PrEP was a way of protecting themselves without any need for communication about HIV risk.
Basically, I really didn't know how to navigate conversations a lot or I just forgot about conversations in the moment. So this was something . . . I like to think I'm pretty organised so for me being able to do something daily is a lot easier than one thing like when you're with somebody. Lance, 34, on PrEP

Several men in the study, including negative men not taking PrEP, talked about having condomless sex with a range of regular fuckbuddies with whom they had established trust relationships.
The people that I do have sex with without a condom who are on PrEP I know are tops. I know that they test regularly and I've, I had a long history with them before. Long-ish history. Max, 39, not on PrEP.
Several of the HIV negative men-both those on PrEP and those not-reported some experience of 'vicarious PrEP' [34] where one partner was on PrEP and the other relied on that for risk management by proxy. While several participants thought that this was an adequate strategy with known and trusted fuckbuddies, it was also strongly criticised by other participants. Thus, while there was a consensus that the sex culture had changed particularly with respect to how sex is negotiated, there were differing views about the meaning of that change in this theme, and whether it was just about more freedom for condomless sex, or whether there was social value in the change.
---
Stigma
Participants spoke about stigma in a range of different ways, and these accounts illustrated some of the many contradictions associated with the arrival of PrEP on the HIV prevention landscape. Some men described how the deliberate avoidance of men with HIV as sexual or relationship partners, which has been well documented [35], still persists even among PrEP users. Many participants also described how they either excluded-or were excluded by-other men because they were not using PrEP. Despite some consensus that PrEP should have contributed to reducing the serodivide between HIV positive and HIV negative men, the stigma associated with an HIV diagnosis was frequently spoken about as a primary reason for wanting to stay HIV negative, and sometimes for avoiding sex with known HIV positive partners even when on PrEP.

I do know that there's like medication and it's like manageable, but the stigma scares me. . . I think that's part of the reason I haven't been with an openly positive partner because I'm like even on PrEP I wouldn't wanna take that risk. Calvin, 18, on PrEP.
Many participants perceived that with increased uptake of PrEP, many within gay male sex cultures had become less accepting of HIV negative men who opted not to take it.

I understand for some people there's a lifestyle decision around using PrEP but it's not for everyone and the stigma is that, if you're against PrEP or you don't think you need to take it up, that you're somehow an idiot. So that's the new stigma in the community. That, if you're on PrEP, you're a responsible, socially considerate, golden gay. And, if you're not on it, you're somebody who can be poo-hooed and dismissed, and attacked. Justin, 40, not on PrEP.

This idea that not using PrEP and wanting condom-protected sex diminished sexual capital was echoed across the different groups of participants. Some participants openly acknowledged that they would reject a potential sex partner if he wanted to use a condom.
If I'm at a sex party . . . if I turn around [and] somebody's put a condom on, I will roll my eyes and get up, and walk away. Jack, 39, on PrEP

Jack's reported actions convey not just a 'no, thank you' to a prospective partner, but a pointed act of rejection. Other participants reported filtering out prospective partners who wanted to use condoms by positively selecting partners on the basis of PrEP use.
What's your name? Are you on PrEP? Marc, 32, gay community participant on PrEP.
Other gay community participants confirmed that expressing an interest in using condoms was likely to result in rejection.
To be honest with you, if it's in Sydney or Melbourne, you could almost guarantee that a condom's gonna be a deal-breaker for the other person. David, 40, on PrEP.

This perception that wanting to continue to use condoms could adversely affect a man's sexual capital was also predicted by one of the health providers.
The sexual, social milieu is going to change and, if you want to have sex, you're going to have to adapt to the new flavour. Unless you're the cutest boy on earth, negotiating condom use is going to become harder. Healthcare provider #3.
HIV community professionals working in a community-based HIV testing site also noted that some men who had previously been condom users were turning to PrEP due to peer pressure:
These days I'm seeing more and more people come with, have been using condoms until today but they find that they, when meeting people who are on PrEP and they don't want to use condoms, they find that conversation a bit of an issue. So eventually they feel like they are missing out because the guy on PrEP ends up not necessarily having sex with them because they don't want to use a condom. So some people have decided to go on PrEP because they find that their casual partners don't want to have sex with them 'cause they won't use a condom. HIV community professional #4
In this thematic area, there was little evidence of PrEP use or PrEP users being shamed or stigmatised; rather it was men who chose not to use PrEP who reported feeling that their social and sexual capital was diminished. Regarding HIV stigma, many participants accepted that it was a given. While some reflected on how their PrEP use could potentially reduce HIV stigma, one of the key reasons that HIV negative participants gave for wanting to remain HIV negative was to avoid the perceived social burden and loss of sexual capital attached to an HIV positive diagnosis.
---
Responsibility and care
From a range of domains including condom use, sexually transmissible infection (STI) testing, and participating in research, we identified the cross-cutting theme of responsibility and care. That is, participants framed their responses on these issues in terms of either interpersonal responsibility or responsibility at a broader social level. Several participants framed frequent STI testing and subsequent communication of positive results to partners as a considered strategy of "stopping the spread of them as much as I can" (Jack, 39, on PrEP). This strategy included testing more often than the recommended three-month interval, and testing after significant risk events (such as after a sex party of 20, as cited by one participant). For some participants, this sense of responsibility also extended to wanting to ensure that their sex partners had the skills to reduce their HIV risk. For one participant on PrEP, this meant resisting partners who wanted to rely on vicarious PrEP (that is, assuming that condomless sex is safe because a partner is on PrEP, when not on it oneself).

I think you have a moral responsibility to ensure that the person you're actually having sex with is-if you actually have some knowledge and some ability to prevent that person from catching HIV, then, then you need to reinforce it in some sort of way and that's either condoms or PrEP. And, if you can't have the discussion and know that person's gonna be on PrEP in the near future, then you need to reinforce with the condoms. Gordon, 53, on PrEP

Two other participants talked at length about how they promoted regular STI and HIV testing in their social circles, particularly to younger friends.

I spend a lot of time just checking in on my friends. . . "Hi, how are you? . . . Hey, have you had your tests recently?" Mannie, 35, on PrEP.
Several participants talked about the importance of PrEP being available for men in serodiscordant relationships, even if the HIV positive partner had an undetectable viral load and the couple was monogamous, meaning that there was no HIV transmission risk. The rationale for this was so that the HIV negative partner was taking responsibility for his own safety, not relying on his partner's adherence to medication to manage HIV risk.
It may be doubling-up but then it gives the person capacity to, to be responsible for their own safety. Josh, 45, taking PrEP periodically

In addition to wanting to take responsibility for their own sexual health, there was also an element of distrust of a partner's undetectable viral load as being a reliable form of safe sex. As noted earlier, some participants voiced nervousness of condom-free sex with known positive partners.
Many men also talked about responsibility in terms of their participation in research to generate data for the good of the community.
One of the reasons I'm happy to do this [interview] however long this takes out of the day is I just think it's a very good thing. [PrEP] has been very good for me and, if I can do things that encourage it to be more readily available and more accessible, I'm happy to do that.
---
Ian, 53, on PrEP
The concept of being a responsible sexual subject was important to the gay community participants in this study, regardless of whether they were HIV negative or positive and whether or not they took PrEP. While for some condoms remained important both practically and symbolically, others were actively reframing practices such as STI testing as ways of taking responsibility. This concept of research participation as a way of enacting a responsible attitude to community was also raised repeatedly by participants-this was not related to a question asked by the interviewer but volunteered spontaneously by several participants.
---
Discussion
This study explored the impact of PrEP on evolving gay male sex cultures focusing on the perceptions of gay men in Sydney, Australia, and included perspectives from health service providers and community-based stakeholders. The findings reflect that the meaning of PrEP in the lives of these men needs to be understood in the context of sex cultures deeply inflected with norms that arose in response to the risk of HIV. Taking PrEP can provide access to the pleasure of condomless sex without HIV risk, but it also disrupts decades of community norms where practices of risk reduction-condom use, serosorting [4], negotiated safety [1], strategic positioning [3]-all required negotiation and had to some degree become associated with a demonstration of care for self and other, sometimes described as 'sexual citizenship' [10]. The displacement of older 'safe sex' norms did not, however, indicate that participants were less invested in community. Many of the PrEP-taking men in this study talked about how other practices related to PrEP such as frequent STI testing and proactive partner notification of diagnoses, advocating for and educating others on PrEP, and participating in research could also be construed as acts of care for partners and community [36], or a new form of 'citizenship'.
In considering the impacts of PrEP uptake on the sexual culture, we explored how discourses about PrEP contributed to shaping a normative goal of a new 'safe sex' culture that embraces a much broader menu of options [37]. We contend that the aspirational social norms articulated by the participants and discussed herein comprise a sex culture in which risks are minimised, participants have a fair chance of finding sexual satisfaction regardless of HIV serostatus or choice of HIV risk reduction intervention, free from stigma and discrimination, with community practices that sustain and promulgate these norms. In each of these three areas-minimising risk, having discrimination-free satisfying sex, and developing and sustaining community practices that support these norms-there were areas of contention.
Nearly all the gay community participants reported that their own sexual practice had changed with increasing community uptake of PrEP, in that they were less likely to use condoms in casual sex. This echoes findings of Newman et al. and Pantalone et al. [19,28] but contrasts with a 2017 U.S. study which found that while PrEP brought participants a feeling of relief or reprieve from HIV stress, it did not directly impact their practice [38]. The difference with the 2017 study may reflect increasing community confidence in the effectiveness of PrEP.
Confidence in PrEP did not, however, necessarily mean that participants were comfortable having sex with known HIV positive partners. While some participants-particularly those in serodiscordant relationships-were very clear that such sex would be 'safe', others expressed avoidance of sex with known positive partners despite taking PrEP. These participants themselves recognised this avoidance as irrational, given that the point of PrEP is to prevent HIV acquisition and that they had likely had sex with undisclosed HIV positive partners. Thus, while some of the HIV positive men saw PrEP use as dissolving some of the barriers to sex between people of different serostatus-'bridging the serodivide' [39]-some HIV negative men continued to have discriminatory attitudes towards known HIV positive partners. This contrasts with results from two separate U.S. based studies [18,28], which both found that PrEP uptake helped to diminish feelings of stigma toward men with HIV. Again, this difference may be due to increased confidence in PrEP efficacy, as the U.S. studies recruited later than our cohort.
Within our cohort, there was also evidence of a significant bias against men who opted to use condoms as their primary risk reduction method, echoing findings of both Newman et al. and Pantalone et al., who noted increased pressures for condomless sex and increased challenges in negotiating condom use [19,28]. This finding in three separate studies leads to a disquieting conclusion: opting to use condoms as primary risk reduction, and/or disclosing an HIV positive status, could diminish an individual's sexual capital and limit opportunities for satisfying sex.
With regard to supportive community practices that respect diversities and different choices, some men saw the combination of PrEP and hook-up apps as decentring communication around sexual practice and eroding the community building that some associated with sexual negotiation around condom use. Nevertheless, they reported enjoying the sexual freedoms afforded by PrEP.
The finding that non-use of PrEP could be stigmatised was also seen in a Canadian study [40]. Orne and Gall used a model of 'PrEP citizenship' to explain how widespread PrEP uptake produced a culture of conformity to PrEP-centred regimens. This model included taking up PrEP ('conversion'), advocating it to others ('evangelising'), adherence ('self-governance') and repeat testing ('surveillance'), and posited non-users as 'potentially infectious' and 'stigmatised and irresponsible people' (p. 657) as distinct from the 'good citizens' taking PrEP. This model has parallels with Thomann's neoliberal sexual subject who acknowledges HIV risk [41], takes pre-emptive pharmaceutical action against it, and becomes 'biomedically responsibilised'. Both Thomann's and Orne and Gall's analyses foregrounded how 'PrEP advocacy' or 'demand creation'-as distinct from advocacy for a choice of HIV prevention interventions available to all-can marginalise those who make different choices, such as the choice to use condoms. Evidence from this study supports that contention, in that some participants took up both PrEP use and PrEP advocacy as 'the' response to HIV prevention, which alienated men who did not want to take antiretrovirals preventatively. Of note, however, some PrEP takers in this study resisted discourses of conformity to universal PrEP use and continued to champion a range of options depending on circumstances. In particular, some participants discussed PrEP use in the context of travel as distinct from during everyday life, given that for some travel was an opportunity for non-relationship sex, including within the context of a relationship agreement. This phenomenon further breaks down the binary of 'PrEP user' and 'non-user' [19], and documents a new form of risk-reduction adaptation.
The qualitative approach of this study enabled a rich and nuanced analysis of the evolution of safe sex norms concomitant with the advent of PrEP. While the specific impacts of PrEP on HIV risk reduction practice were one focus, our other focus on normativity within these sex cultures illuminated how care can be demonstrated between casual sex partners when the problem of HIV risk has been largely dealt with by a daily pill, and how differences in values could or should be accommodated in a sex culture that aspires not to discriminate on the basis of serostatus or choice of HIV risk reduction method.
PrEP access in Australia was at least four years behind the U.S. approval in 2012, as the first large scale implementation study in Australia began in 2016 [29] and subsidised national access began in 2018 [30]. This time lag between Australia and the U.S.-and the fact that Australian community-based HIV organisations had to work hard to achieve subsidised access [42]-may in part explain why there was a less severe anti-PrEP backlash once the intervention was available. The Australian HIV community sector, health care providers and sexually active gay men had seen the 'Truvada whore' controversy [8]-which stereotyped PrEP users as promiscuous and irresponsible-play out in the U.S. before PrEP was widely available. The context of having no nationally accessible, funded mechanisms for PrEP access in Australia some four years after the FDA approval arguably contributed to heightening pro-PrEP sentiment [41], because the global connectedness of gay male communities allowed men in Australia to witness the sexual freedom that PrEP facilitated in the U.S. and recognise the advantages it could bring.
This study has some limitations. Gay community participants had to contact the researchers to take part in the study, so those with strong views on the impacts of PrEP may have been more likely to volunteer. The majority of participants were white, but we did not collect data systematically on ethnicity. Accordingly, the study may overrepresent the views of white gay men. Data were also collected over a period of three years during a period of rapid change, so are not a snapshot of a point in time, but a collection of perspectives that were in the process of evolution. Most of the study participants were taking PrEP, and a significantly smaller number of HIV negative men not on PrEP and HIV positive men were included, so while the sample includes perspectives from a range of different actors, they are not equally sampled. Finally, as this paper is about the impacts of PrEP on a sex culture, the voices of the gay community participants have been privileged over those of the healthcare providers and HIV community-based professionals.
---
Conclusion
The impacts of PrEP are complex and need to be considered in the context of evolving gay male sex cultures in which PrEP is only one element. PrEP was not the catalyst for condomless sex for most of the men in this group, but the introduction and scale-up of PrEP access arguably enabled men to talk about condomless sex more openly, and to consider what matters in gay male sex cultures where condom use is decentred. This study has important implications for health promotion. It reveals how new community conversations about HIV prevention can promote PrEP use as the single best option, constructing it as a rigid new standard to which men 'should' adhere, instead of promoting and promulgating choice and genuine acceptance that different values can mean that different options may work better for some individuals. The identification of a potentially damaging emerging norm in these data, that of PrEP use being positioned prescriptively as the 'best' form of HIV prevention for HIV negative men with stigma attaching to non-use, informed the development of ACON's 2017 campaign 'How do you do it?', in which the importance of individual choice from a range of effective options was emphasised with respect to HIV prevention [43].
While recognising the great importance of PrEP for many men, this study suggests that, rather than promoting PrEP as the new 'safe sex' orthodoxy, there is a need to ensure that there is a range of HIV prevention options that have both high efficacy and high acceptability. Accordingly, health promotion should focus on building community attitudes that respect diversity and challenge the primacy of any one prevention tool.
---
Data cannot be shared publicly because it contains sensitive information that the study participants did not consent to have shared. Data access queries may be directed to the UNSW Human Research Ethics Coordinator (contact via [email protected]. au or via + 61 2 9385 6222).
---
Author Contributions
Conceptualization: Bridget Haire, Dean Murphy, Lisa Maher, Iryna Zablotska-Manos.
Background: General Practice (GP) seems to be perceived as less attractive throughout Europe. Most of the policies on the subject focused on negative factors. An EGPRN research team from eight participating countries was created in order to clarify the positive factors involved in appeal and retention in GP throughout Europe. The objective was to explore the positive factors supporting the satisfaction of General Practitioners (GPs) in clinical practice throughout Europe. Method: Qualitative study, employing face-to-face interviews and focus groups using a phenomenological approach. The setting was primary care in eight European countries: France

---

Background
The low appeal of General Practice and primary care as a career option is a recurrent problem for healthcare systems throughout Europe, the USA and other countries in the Organization for Economic Cooperation and Development (OECD) [1,2]. A high-performing primary healthcare workforce is necessary for an effective health system. However, the shortage of health personnel, the inefficient deployment of those available, and an inadequate working environment contribute to shortages of consistent and efficient human resources for health in European countries.
The European Commission projects the shortage of health personnel in the European Union to be 2 million, including 230,000 physicians and 600,000 nurses, by the year 2020, if nothing is done to adjust measures for recruitment and retention of the workforce [3]. Research has shown that a strong workforce in General Practice is needed to achieve an efficient balance between the use of economic resources and effective care for patients [4].
Most of the research on the GP workforce concentrated on negative factors. The reasons why students did not choose it as a career, or why GPs were leaving the profession, were widely explored. Burnout was one of the most frequently highlighted factors [5]. In many OECD countries, apart from the United Kingdom, the income gap between GPs and specialists had expanded during the last decade, promoting the appeal of other specialties for future physicians [6]. Health policy makers, aware of the problem of a decreasing General Practice workforce, tried to change national policies in most European countries to strengthen General Practice. Health professionals respond to incentives, but financial incentives alone are not enough to improve retention and recruitment; policy responses need to be multifaceted [7]. Dissatisfaction was associated with heavy workload, high levels of mental strain, managing complex care, expectations of patients, administrative tasks and work-home conflicts. Focusing on these issues created a negative atmosphere [5, 8-10]. In the above-mentioned report of the European Commission on recruitment and retention of the workforce in Europe, the authors used a model by Huicho et al. as a conceptual framework to analyze the situation [11]. Attractiveness and retention are two outputs used in the model. Retention is determined by job satisfaction and duration in the profession.
The concept of job satisfaction is complex, as it changes over time and with social context. "Job satisfaction is a pleasant or positive emotional state resulting from an individual's assessment of his or her work or work experience" [12]. There is only a weak relationship between enjoyment and satisfaction, suggesting that other factors contribute to job satisfaction [13,14]. Furthermore, general practice is a specific field, and job satisfaction within it is not fully explained by general theories of human motivation. The research group hypothesized that it was important to investigate the positive angle separately in order to understand which factors give GPs job satisfaction. That was the focus chosen by the research team.
The literature highlighted the poor quality of research on job satisfaction in European General Practice. Most studies were carried out by questionnaire [15], focused on issues of health organization or business, and did not reach the core of GPs' daily practice. Some studies were biased by the authors' preconceptions about the attractiveness of General Practice [16]. Surprisingly few qualitative studies explored the topic of satisfaction [17,18]. The literature did not provide an overall view of GPs' perception of their profession, and it was not certain that positive factors were similar across different cultures or healthcare contexts. Consequently, research into positive factors that could retain GPs in practice would provide a deeper insight into these phenomena.
The aim was to explore the positive factors supporting the satisfaction of General Practitioners (GPs) in primary care throughout Europe.
---
Method
This research is a descriptive qualitative study of the positive factors supporting the attractiveness and retention of General Practitioners in Europe.
---
Research network
A step-by-step methodology was adopted. The first step was to create a group for collaborative research [19,20]. The EGPRN created a research group involving researchers from any country wishing to participate: Belgium (University of Antwerp), France (University of Brest), Germany (University of Hannover), Israel (University of Tel Aviv), Poland (Nicolaus Copernicus University), Bulgaria (University of Plovdiv), Finland (University of Tampere) and Slovenia (University of Ljubljana). Undertaking such a study in several countries, with different cultures and different healthcare systems, presented a challenge. This was made possible by the support of the EGPRN at the various meetings held throughout Europe.
Figure 1 gives an overview of the position of the general practitioner in each country, according to the different healthcare systems.
The authors scored the importance of some specificities of practice in their own country from 0 (not important) to 5 (very important).
The research team decided to conduct a descriptive qualitative study, from the GPs' perspective, in each participating country [21,22]. The first interviews were completed at the Faculty of Brest, in France, with the aim of piloting the first in-depth topic guide.
---
Participants
GPs were purposively selected locally using snowballing in each country. Participants were registered GPs working in primary care settings. To ensure diversity, the following variables were used: age, gender, practice characteristics (individual or group practices), payment system (fee for service, salaried), teaching or having additional professional activities. The GPs included provided their written informed consent. GPs were included until data saturation was reached in each country (meaning no new themes emerged from the interviews) [21,23,24].
Overall, 183 GPs were interviewed in eight countries: 7 in Belgium, 14 in Bulgaria, 30 in Finland, 71 in France, 22 in Germany, 19 in Israel, 14 in Poland and 6 in Slovenia. In each country, a purposive sample was obtained and GPs were recruited until data sufficiency was reached. Four qualitative studies were conducted in France, where it was always the intention to include more participants than in the other countries, with a view to exploring potential differences by practice locality, gender, type of practice and teaching activity. One French study was carried out through five focus groups, which brought together 38 GPs; the three others used individual interviews (11, 6 and 14 participants). The other countries conducted one qualitative study each: in Germany through focus groups, in Israel through focus groups and individual interviews, and in the remaining countries through individual interviews.
---
Study procedure and data collection
The research team discussed every step of the study, in two annual workshops, during EGPRN conferences, within the duration of the study.
As there were few examples in the literature and the existing models of job satisfaction were oriented more towards employees working in a company, the international research team developed an interview guide based on their previous literature review [16]. The guide was piloted in France and was adapted and translated to ensure a detailed contribution from the GPs interviewed and, subsequently, a rich collection of qualitative data in each country. Local researchers conducted the interviews in their native language. In accordance with the research question, interviewers were looking for positive views. All interviewers were GPs working in clinical practice and in a university or college, except in Belgium, where the interviewer was a female psychologist working in the department of General Practice. The GPs were first asked to give a brief account of a positive experience in their practice (ice-breaker question) [21]. The interview guide (Table 1) was used to encourage participants to tell their personal stories, focusing on positive aspects rather than generating general ideas.
To ensure a maximal variation in collection techniques, in order to collect both individual and group points of views, interviews and focus groups had to take place. Saturation (no new themes emerging from data) had to be reached in each country [21].
---
Data analysis
A thematic qualitative analysis was performed following the process described by Braun and Clarke [25].
In each country, at least two researchers inductively and independently analyzed the transcripts in their native language using descriptive and interpretative codes. They selected a verbatim excerpt (a particular passage or sentence from an interview) to illustrate every code in the codebook. Each code was extracted in the native language and translated into English. The contextual factors were explored in each setting by the local team of researchers and taken into account during the analysis. The whole team then discussed the codes several times in face-to-face meetings during seven EGPRN workshops. The research team merged the national codes into one European codebook. During a two-day meeting, the research team performed an in-depth exploration of the interpretative codes and generated a final list of major themes. Credibility was verified by researcher triangulation, especially during data collection and analysis. During the EGPRN workshops, peer debriefings on the analysis and the emerging results were held. Interviewers and researchers from backgrounds as diverse as psychology, sociology, medicine and anthropology reflected on the data from their own perspective.
---
Results
Table 2 gives an overview of the characteristics of the participants. The mean age was high, which indicates a long duration in the profession.
Six main themes were found during analysis. The results are summarized in the Fig. 2: International codebook on GP satisfaction.
---
GP as a person
The analysis of the data showed that the GP was a person with intrinsic characteristics, including interest in people's lives, with a strong ability to cope with different situations and patients. GPs loved to practice and the passion for their job was more important than the financial implications.
"I also work with a very heterogeneous population, ultra-religious and secular, from various countries of origin" (Israel).
"Really pleasant to work with patients, it's not only the financial aspect" (Bulgaria).
"I work for pleasure. I don't do it for the money. If I don't like it anymore I'll stop doing it" (Belgium).
GPs said they wanted to remain ordinary people, with a strong need to take care of their personal wellbeing. This was more than just having time for hobbies and leisure: GPs were looking for other intellectual challenges and personally enriching activities in their free time.
"General practice is a beautiful profession but you are on your own too much, even in a group practice. You see the community from a limited perspective. It's important to keep in touch with the community. The fact remains that you are probably a father or mother or a partner, as well as being a physician. It's interesting to have a different perspective: it broadens your way of thinking. Reading books is the same. It's essential to read good books and to empathize with the characters. This is enriching for you as a human being, but also for your practice." (Belgium).
GPs said they wanted to be there for their patients, to find common ground with them, but they also wanted to control the level of involvement with their patients. They described the ability to balance empathy with professional distance in their interaction with patients and being able to deal with uncertainty in the profession.
The "GP as a person" theme was important, as all the above conditions were required in order to be a satisfied GP who wishes to remain in clinical work.
---
GP skills and competencies needed in practice
GPs reported satisfaction about making correct diagnoses in challenging situations, with low technical support, and being rewarded with patients' gratitude. The intellectual aspect of medical decision-making led to effective medical management and was a positive factor for GPs. General practice is the first point of care for the patient and GPs felt themselves to be the coordinators and managers of care and the advocates for the patient. To be the key person in primary care requires strong inter-professional, collaborative skills and effective support from other medical specialties and from paramedics.
GPs believed that it was highly important to be an efficient communicator in order to perform all these tasks. GPs were patient-centered and wanted to provide care using a comprehensive and holistic approach. A patient-centered approach is a WONCA core competency of General Practice, while efficient communication with the patient is a generic skill for all health workers.
They wanted to combine broad medical knowledge with a high level of empathy, balancing the patient's concerns with official guidelines. Guiding the patient's education was an important role for the GP, who was also a coach for lifestyle changes. This theme was linked to the holistic model of General Practice, which is also a WONCA core competency.
"To be both competent and do a bit of everything" (France).
"This is intellectually extremely stimulating and challenging work" (Finland).
"Happy and satisfied when making the correct diagnoses" (Bulgaria).
"The patient arrives and thanks me for the good diagnoses" (Poland).
"You don't just see common colds during the day. You get interesting cases and you have time to explore them. This makes general practice interesting. It's a 360° job. Variation is important" (Belgium).
"It's our task to empower young Muslims, to encourage them to study well, to become nurses or physicians" (Belgium).
---
Doctor-patient relationships
Patients are free to choose their GP and this is important because of the particular aspects of the doctor-patient relationship in primary care. There was a strong relationship between the GP as a person and the GP who enjoyed a rewarding, interpersonal relationship with patients. GPs had enriching human experiences with patients, which was important to the physician's self-fulfillment as a human being. Mutual trust and respect in their relationships were important dimensions. Being a patient-centered physician was a rewarding challenge. GPs felt they were a part of the patient's environment, but with the need to set their professional limits. GPs learned about life through their patients.
GPs said they were ageing with their patients and had a long-term relationship with some of them. They were "real family doctors" and often cared for several generations.
They saw babies grow up and become parents themselves. These unique doctor-patient relationships enhanced GP satisfaction.
"I am the doctor for this whole family and in general practice that is something important" (France).
"Some I got to know when they were small kids and they still come to see me at the age of 18 or older." (Germany).
"We know much more about them than other doctors, because our patients have chosen us" (Bulgaria).
"We accompany patients, throughout pregnancy, cancer and death and from the moment before birth until the age of 99 years and over" (Germany).
"Patients asked for a home visit and insisted I join them at their meal and sometimes I did that but only when they were more like friends… I've had a lot of invitations to weddings…" (Belgium).
GPs also liked to negotiate with patients, to help them to make decisions but also to motivate them to make lifestyle changes.
---
Autonomy in the workplace
Freedom in practice was closely related to work organization, which was important in all countries.
GPs stayed in clinical work if they had chosen their own practice location. The living environment needed to be attractive for the family. GPs wanted to apply personal touches to their consulting rooms and to choose technical equipment that suited their personal requirements.
---
Fig. 2 International codebook on GP satisfaction
Even more important was the possibility of choosing work colleagues who shared the same vision of General Practice. Satisfied GPs contributed to the organization of the practice and were influential in decisions about work and payment methods. Where there was a salaried system, GPs wanted to earn a reasonable salary to have a satisfying work-life balance.
Flexibility at work was not to be interpreted as a demand from the management to be flexible in working hours but to have the flexibility to make one's own choices. Most GPs preferred additional career opportunities such as teaching, working in a nursing home and conducting research. To fulfil all these conditions GPs wanted to work in a well-organized practice with a competent support team, with a secretarial service, practice assistants and the necessary technical equipment.
Another condition was an organized out-of-hours service. GPs did not want to be disturbed outside practice hours without prior arrangement.
"This is the most important in our practice that I decide when and how to work" (Bulgaria).
"If someone says that a practice room must be completely impersonal, it has to be interchangeable. I understand this. It's respectful towards the others but a personal touch is important for communicating something about yourself to the patient. That is important." (Israel).
"It is important to have one's own organizational systems and equipment" (France).
---
"I didn't have to do night shifts" (Poland).
---
Teaching general practice
GPs reported that they wanted to acquire new medical knowledge and learn new techniques. They liked to transmit the skills of their job. They were proud of their profession and they wanted to teach and to have an effective relationship with trainees. Teaching contributed to feelings of satisfaction with the profession. GPs mentioned the importance of training in attracting junior colleagues and the positive aspect of the mutual benefit to GPs and trainees. Teaching gave GPs more incentives for their own continued professional development and enabled them to complete their competencies. GPs felt gratified where general medicine was recognized as a specialty at the university and by the public authorities.
"Guiding younger colleagues is the most rewarding part of my job" (Finland).
---
"I like to transmit what I have learned" (France).
"I was a tutor for a seminar group, teaching, I like to do that, those people had to learn, that was very pleasant" (Belgium).
"I am teaching General Practice to students and I have found I have a flair for it. It is really fun!" (Germany).
"I feel good accompanying young trainees through the process of making their choices" (Belgium).
"All that you do in teaching (trainees), transmitting your knowledge to another, improves your accumulated experience. You see yourself through the eyes of others" (Israel).
---
Supportive factors for work-life balance
Factors that supported an efficient work-life balance were the possibility of having a full family life, with a social support network and the opportunity to benefit the whole family by enjoying holidays, money and free time. Money was not the most important issue, but income needed to be sufficient for a comfortable family life, meaning sufficient resources for a satisfying education for the children and the possibility of having regular holidays. GPs found they had job security, which enabled them to feel secure and free from unemployment worries.
GPs explained that they wanted to choose how to separate professional and private life. They said they wanted to have social contacts in the community, which would give them a broader perspective in terms of their patients. Having relationships with patients outside the practice was important. GPs said they needed to be part of the social community if they were to stay in General Practice. GPs wanted to have a full family life and to keep free time for this.
"Family Medicine is an opportunity to be with the family" (Israel).
---
"My family supports me" (Bulgaria).
"I try to keep work and leisure time away from each other... It is important in terms of coping. In my leisure time I have a different role from that of a doctor" (Finland).
---
Country specific themes
Besides those international themes there were some country specific results.
In Poland and in Slovenia, even when prompted in the interviews, GPs did not mention the importance of teaching.
Belgian GPs said how important discussing the vision and mission involved in starting a group practice was to them. They took time for this process and wanted junior colleagues in practice who would share their vision and their mission. Statements needed to be updated regularly to meet the needs of a changing society and the challenges in health care. Group practices used external coaching to overcome problems.
"Vision and mission are important. We started from ten values, such as respect, diversity, and the aim to train young GPs… You have to renew the vision and mission regularly and adapt to the changing community" (Belgian male GP).
French GPs were very attentive to the need for organized continuity of care. The GPs wanted to be there for their patients, but they also wanted to protect their personal lives. The word "vocation" had a religious connotation that displeased some GPs.
Finnish GPs appreciated the stimulating working community and multidisciplinary teamwork. In addition, they valued the set working hours and the professional development opportunities available in the workplace.
Israeli GPs were proud of their respected position. They preferred a private practice in their own style and stressed the importance of teamwork.
"The clinics where I felt good were clinics where the staff was amazing and committed, the nurses were good and the secretaries did the work, and there was a feeling that we were working for better medicine. There were weekly meetings where we really thought about how to do better; a feeling of teamwork" (Israel).
For Polish GPs, there were some positive developments in the financing of medicine, which provided better opportunities for an effective work-life balance. A further Polish theme favoured having a strong union that could influence policy; it gave GPs an identity as a group.
"The fact that I work here as I do means that, although my income is not too high, my kids can attend private schools and don't have to go to normal state schools" (Polish female GP).
---
Discussion
---
Main results
Throughout Europe, common positive factors for the satisfaction of GPs in clinical practice were found. One of the main characteristics of GPs was the need for specific competencies in managing care and communicating with patients. They needed to cope with problems during their career and to collaborate professionally. GPs were stimulated by intellectual challenges, not only within the profession; they also wanted enough time for personal development outside the workplace to counterbalance the stress of daily practice.
Satisfied GPs are people with specific intrinsic characteristics (open-minded, curious). Participants described themselves as feeling comfortable in their job when they were trained in specific clinical and technical skills and had efficient communication skills. The long-term doctor-patient relationship was perceived positively by the GPs. They loved teaching all these specific skills to younger GPs and appreciated the feedback and mutual benefit to be found in teaching activities. Finally, GPs need policy support for well-managed practices and out-of-hours services to maintain an optimal work-life balance.
---
Strengths and limitations of this study
To our knowledge, this multinational analysis of data from 183 GPs is the first European multicentre qualitative study on this topic [16,26]. This study collected complete and complex data from eight countries. One of its strengths was studying a diverse population of GPs, with different cultures and health systems. Despite these differences, the main satisfaction factors for becoming a GP and staying in clinical practice were found in all contexts. For instance, money is important, but only relatively: having enough to lead a comfortable family life with sufficient free time is crucial for every GP, although incomes vary across Europe.
---
Credibility and transferability
Credibility was verified by researcher triangulation, especially during data collection and analysis. During the workshops, peer debriefings on the analysis and the emerging results were held. Interviewers and researchers from such diverse backgrounds as psychology, sociology, medicine and anthropology reflected on the data from their own researcher's perspective. As the results in several countries with different healthcare systems were very similar, the transferability of data seems possible.
The main weakness was a possible interpretation bias. The 183 GPs provided very rich data in several languages. This was a strength of the research, but also a difficulty: the analysis and interpretation of the verbatim transcripts posed linguistic and cultural challenges. A different classification of themes could have been produced, but this risk was limited by the group meetings and the many emails, phone calls and Skype® discussions held during the research process.
The number of GPs interviewed varied in the different countries, potentially leading to differences in the informational detail and in the depth of the analysis of the interviews/focus groups. However, data saturation was reached in all settings, limiting this possible bias.
---
Discussion of the findings
The theme "GP as a person" was highlighted in this study and in the literature review [16]. The studies found that this special identity of GPs was linked to their intrinsic characteristics. The theme was important in each of the European countries. A GP is, of necessity, someone with a specific personality suited to General Practice. GPs like to take care of people [27], with a "feeling of caring" [28]: "I can have a big impact on people's lives" [27]. This is a strong personality characteristic of GPs which policy-makers might take into consideration when formulating policies concerning the medical workforce.
The GP skills and competencies were found in the literature [16,29], but in a more restricted form, focused on effective medical management of the patient and the subsequent feeling of being competent. In a Scottish qualitative study, GPs highlighted the satisfaction derived from the perception of the consultation outcome: "Although clinical competence was an integral part of the doctors' satisfaction, they alluded to personal attributes that contributed to their individual identity as a doctor" [30]. "Take care of them and do the best you can" [27]. In our study we identified all the WONCA core competencies, and this is important [4]. The validation of WONCA's characteristics and competencies in hundreds of interviews across eight European countries shows the strength of the WONCA framework and the common characteristics of GPs wherever they work. The analysis of the data demonstrated a strong link between competence and satisfaction. It is therefore necessary to give general practitioners the opportunity to acquire and improve these skills.
The importance of the doctor-patient relationship was described as an effective factor in job satisfaction for the General Practice workforce [31,32]. Nevertheless, previous studies concentrated less on the rewarding nature of the relationship, its long duration and the mutual interaction.
Freedom to manage the workplace organization has been described and is confirmed here. It does not prevent long working hours but focuses on the organization of the practice [33][34][35]. There was consistent evidence that GPs needed freedom for work satisfaction [36]. GPs wanted autonomy in their work [17].
The teaching and learning activities have been described and this study confirmed their importance.
Academic responsibilities provide positive stimulation and new perspectives for GPs [17,36,37]. They wanted to be recognized by the academic world. Clerkships in General Practice were seen as important for attracting students to a career in General Practice [38]. The influence on students was important for their career choice [39]. The practice of clinical teaching in initial medical education, with positive role modelling, was also important [40,41].
There was a strong link between the GP, his or her family and the community they lived in. This was especially true for those practising in rural areas [39, 42]. The GP's family was sensitive to the fact that General Practice is a respected profession. Outside their professional role, other forms of satisfaction were important, such as strong social support from schools, leisure activities and a satisfying quality of life in the residential environment [43], and, of course, an income in balance with their heavy workload.
Finally, the results point towards a particular theory of GP satisfaction, one that focuses on human relationships, specific competencies, patients and the social community.
---
Implications for medical education and practice
Learning the core competencies of General Practice in initial and continuous medical education is very important and should lead to extended educational programs in Europe.
Mobilizing stakeholders is a necessary condition of success however it is not sufficient [7].
To improve the attractiveness of general practice, universities should organise a specific selection process for GPs, not just for specialists. This might engender greater respect for the profession.
Roos et al. performed a questionnaire study on the "motivation for career choice and job satisfaction of GP trainees and newly qualified GPs across Europe" [15]. The most frequently cited reasons for choosing General Practice were "compatibility with family life", "challenging, medically broad discipline", "individual approach to people", "holistic approach" and "autonomy and independence". The current study focused on working GPs rather than trainees, but some of its results overlap with Roos' research.
It remains essential to teach undergraduate medical students the bio-medical aspects of general practice, but it is also necessary to teach the management of primary care, interprofessional collaboration and communication skills. Trainees need to think about their own wellbeing and to learn to cope with problems in daily practice. The intellectual aspect of General Practice is important. Decision-makers should use all the means at their disposal to promote the profession by providing continual development.
GPs want to be involved in the management of their practice. Stakeholders should be aware of, and very cautious about, this topic, which is described as extraordinarily sensitive. Systems that try to administer GP practices without involving the GPs should expect difficulties.
---
Implications for research
Further studies would be useful with the objective of studying which satisfaction factors have the greatest impact on recruitment and retention in General Practice.
This description of satisfied GPs will be disseminated throughout Europe to implement new policies for a stronger GP workforce. This may assist the international research team in the design of further studies to investigate the links between these positive factors and the growth of the GP workforce. At this stage, the research team will test the usefulness of each positive factor in helping each country to design efficient policies to increase its workforce.
---
Conclusion
Throughout Europe, GPs experience the same positive factors which support them in their careers in clinical practice. The central idea is the GP as a person, who needs continuous support and professional development of the special skills derived from WONCA's core competencies. In addition, GPs want the freedom to choose their working environment, to organize their own practice, and to work in collaboration with other health workers and patients.
National policy arrangements on working conditions, income, training and official recognition of general practitioners are important in facilitating the choice of a career in general practice. Stakeholders should be aware of these factors when considering how to increase the GP workforce.
---
Availability of data and materials Some data in this study are confidential. The datasets generated and analysed during the current study are therefore not publicly available, but they can be obtained from the corresponding author on reasonable request.
---
Abbreviations
EGPRN: European General Practice Research Network; GP: General practitioner; GPs: General practitioners; n/a: not applicable; UBO: Université de Bretagne Occidentale, France; WONCA: World Organization of National Colleges, Academies and Academic Associations of General Practitioners/Family Physicians

---
Authors' contributions
BLF designed the study, collected data, drafted and revised the paper. HB designed the study, collected data and revised the paper. JYLR designed the study, collected data, drafted and revised the paper. HL collected data and revised the paper. SC collected data and revised the paper. AS collected data and revised the paper. RH collected data and revised the paper. PN revised the paper. RA collected data and revised the paper. TK collected data and revised the paper. ZK-K collected data and revised the paper. TM revised the paper. LP designed the study, collected data and revised the paper. All authors read and approved the final manuscript.
---
Ethics approval and consent to participate
The Ethical Committee of the "Université de Bretagne Occidentale" (UBO), France approved the study for the whole of Europe: Decision N°6/5 of December 05, 2011. The Université de Bretagne Occidentale ethics committee provided ethical approval for the recruitment of doctors from overseas because of the low-risk nature of the study and the practical difficulty of obtaining ethical approval in multiple countries for the recruitment of small numbers of health professional participants through snowballing. Further, the participant recruitment strategy detailed above precluded us from knowing in advance which countries we would recruit from and from prospectively applying for ethical approval in each country. The participants provided their written informed consent to participate in the study.
---
Consent for publication
Not applicable as no personal information is provided in the manuscript.
---
Competing interests
Zalika Klemenc-Ketis and Radost Assenova are members of the editorial board (Associate Editor) of BMC Family Practice. The other authors hereby declare that they have no competing interests in this research.
Author details 1 EA 7479 SPURBO, Department of General Practice, Université de Bretagne Occidentale, Brest, France. 2 Department of Primary and Interdisciplinary Care. Faculty of Medicine and Health Sciences, University Antwerp, Antwerp, Belgium. 3 Centre for Public Health and Healthcare, Hannover Medical School, Hannover, Germany. 4 Department of Family Medicine, Tel Aviv University, Tel Aviv, Israel. 5 Clinical Psychology Department, Nicolaus Copernicus University, Torun, Poland. 6 Department of Urology and General Medicine, Department of General Medicine, Faculty of Medicine, Medical University of Plovdiv, Plovdiv, Bulgaria. 7 University of Tampere, Faculty of Medicine and Life Sciences, Tampere, Finland. 8 Department of Family Medicine, Faculty of Medicine, University of Ljubljana, Ljubljana, Slovenia. 9 Department of Family Medicine, Faculty of Medicine, University of Maribor, Maribor, Slovenia.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Participating in physical activity is beneficial for health. Whilst Aboriginal children possess high levels of physical activity, this declines rapidly by early adolescence. Low physical activity participation is a behavioral risk factor for chronic disease, which is present at much higher rates in Australian Aboriginal communities compared to non-Aboriginal communities. Through photos and 'yarning', the Australian Aboriginal cultural form of conversation, this photovoice study explored the barriers and facilitators of sport and physical activity participation perceived by Aboriginal children (n = 17) in New South Wales rural communities in Australia for the first time and extended the limited research undertaken nationally. Seven key themes emerged from thematic analysis. Four themes described physical activity barriers, which largely exist at the community and interpersonal level of children's social and cultural context: the physical environment, high costs related to sport and transport, and reliance on parents, along with individual risk factors such as unhealthy eating. Three themes identified physical activity facilitators that exist at the personal, interpersonal, and institutional level: enjoyment from being active, supportive social and family connections, and schools. Findings highlight the need for ongoing maintenance of community facilities to enable physical activity opportunities and ensure safety. Children held strong aspirations for improved and accessible facilities. The strength of friendships and the family unit should be utilized in co-designed and Aboriginal community-led campaigns.
Introduction
Australian Aboriginal and Torres Strait Islander people possess a rich and vibrant culture and have lived on and cared for the country for over 60,000 years [1]. The sudden disruption to lives and culture brought by British colonization in 1770 has created deep inequities and a high burden of poor health for Aboriginal and Torres Strait Islander people that persists to this day [2]. This inequity was sustained over the subsequent 200 or more years by a series of racist Australian policy eras resulting in marginalization, disadvantage, and extreme poverty [1]. One of the outcomes for Aboriginal and Torres Strait Islander people has been a decline in physical activity levels [3], contributing to poor health, including the development of chronic diseases such as type 2 diabetes [4]. Chronic diseases represent 70% of the gap in disease burden between Aboriginal and Torres Strait Islander people and non-Aboriginal Australians [5]. Over one-third of the total disease burden Aboriginal and Torres Strait Islander people experience could be prevented by modifying behavioral risk factors such as physical inactivity [6]. Here, we use the terminology 'Aboriginal' to refer to the Indigenous peoples of Australia (other than where Torres Strait Islander people are specifically mentioned in the references supporting this article), as this terminology is preferred by the communities participating in this study.
Whilst nationally Aboriginal children participate in more physical activity than their non-Aboriginal counterparts, this difference has been shown to decrease as children transition to adolescence [7]. Two studies conducted in New South Wales (NSW) reflect this activity decline [8,9]. Gwynn et al. reported that, compared with their non-Aboriginal counterparts, rural Aboriginal children aged 10-12 years engaged in more physical activity [8]; however, by adolescence, physical activity participation rates were lower in a cohort aged 13-17 years (21% compared to 28%) [9]. A gender difference was also identified, with Aboriginal boys more likely to participate in physical activity than girls [9].
Aboriginal communities differ around Australia, not only by virtue of geographical location but also due to differences in factors such as language and culture [1]. It is therefore important to describe the experiences of Aboriginal children from different communities across the nation to gain insight into the breadth of experiences around participation in sport and physical activity and better inform relevant strategies and policies.
Five studies have reported Aboriginal young people's perceptions about physical activity [10][11][12][13][14]. Of these, three (urban locations) explored children's views of their physical activity in relation to type, amount, and the role this plays in their community [11,13,14]. Only two (rural and remote locations) explored physical activity barriers, neither in NSW [10,12]. The barriers identified in the latter studies included poor community facilities, lack of transport, costs associated with participating in physical activity, and experiences of racism [10,12]. Aboriginal adolescent girls were reported as feeling 'shame' ('stigma and embarrassment associated with gaining attention through certain behavior or actions' [15] (p. 8)) and shyness when wearing swimming costumes in pools and sports clothes to exercise [10,12]. An established relationship between schools and the community was identified as a key facilitator to physical activity participation, as was the involvement and support of family and friends [10][11][12][13][14]. A recent study conducted with Torres Strait Islander communities found that community role models had a positive effect on some barriers to physical activity participation [16]. None of these studies were conducted in NSW, and given the cultural diversity between Aboriginal communities, it is yet to be established how applicable these findings are to young people in that state [17].
A recent systematic review of barriers and facilitators of sport and physical activity among Aboriginal and Torres Strait Islander children and adolescents found limited research (only nine studies) with a number of Australian states not represented [18]. The only study from NSW was not peer-reviewed and reported adult community members' perceptions of the barriers and facilitators for children.
This study was conducted as a sub-study of the Many Rivers Diabetes Prevention Project (MRDPP) in response to that study's findings regarding the physical activity of Aboriginal children [9,19]. The MRDPP aimed to improve the nutrition and physical activity of children living in the North Coast of rural NSW [19] and found physical activity among Aboriginal children declined over time with differences in patterns of decline existing between Aboriginal and non-Aboriginal children [9]. Despite tending to be more active in primary school [8], Aboriginal children from these communities recorded significant declines in non-organized, organized (winter only), and school activity over time when compared with their non-Aboriginal counterparts [9]. To gain insights into this finding and to inform future physical activity health promotion programs, the study team proposed exploring the Aboriginal children's perceptions of barriers in their communities to sport and physical activity participation [19].
This study aimed to explore rural NSW Aboriginal children's perceptions of the barriers and facilitators to their sport and physical activity participation.
The first author of this paper (S.L.) is a non-Aboriginal woman who completed an undergraduate (Honors) degree at the University of Sydney. J.G. is a researcher and non-Aboriginal woman who co-led the MRDPP with N.T. and has worked with the participating communities of this study for 17 years. N.T. is an Aboriginal woman from one of the participating communities who was the Manager Health Promotion and Senior Project Officer of the MRDPP. J.S. is an Aboriginal woman who is also from one of the participating communities and was an Aboriginal Project Officer of the MRDPP. R.P., E.J., and N.A.J. are researchers and non-Aboriginal co-authors who contributed their expertise in physical activity to this research.
---
Methods
---
Study Design
This study utilized a qualitative 'photovoice' methodology derived from the principles of participatory action research. The photovoice method requires participants to take photos which to them represent the topic or issue to be explored. Participants are then interviewed and asked to talk about the photos, typically discussing why these were taken and their meaning. The photos and interviews are the data used in the qualitative analysis. This method crosses cultural and linguistic barriers and enables participants to identify their community's strengths and concerns [20]. Photovoice has been shown to be suitable and culturally appropriate for research with Aboriginal communities exploring issues as varied as food insecurity [21] and the experiences of Aboriginal health workers [22]. In this study, the photovoice method allowed children to explore the environmental and contextual factors that they perceived to influence their sport and physical activity participation [20].
---
Aboriginal Governance Structure and Ethics
The Aboriginal community governance structure and procedures that guided the MRDPP and this sub-study are described elsewhere [23]. Aboriginal Project Officers (APOs) employed in the MRDPP and from the participating communities led the design and implementation of this research, ensuring cultural safety [23]. The APOs also liaised with other organizations, contributed to the thematic analysis, and co-authored this publication. In writing this paper, the authors applied the consolidated criteria for reporting qualitative research (COREQ) checklist to ensure transparency in the research methods and reporting of the important aspects of the study process [24].
Ethical approval was received from the Hunter New England Local Health District Human Research Ethics Committee (reference number 11/10/19/4.04) and the Aboriginal Health and Medical Research Council of NSW (reference number 824/11).
---
Participants and Recruitment
Aboriginal boys and girls aged 10-14 years, residing in two communities (Community A and Community B) on the mid-north coast of NSW, were invited to participate. Recruitment was undertaken using a 'snowball' approach [25], with APOs contacting parents through the Aboriginal Corporation Medical Services (ACMS) in both communities. Parents were asked to inform their children of this study, and the children who were interested consented to participate. Consenting children then invited their peers to participate. Snowball sampling continued until no further potential participants could be identified [25]. Informed consent was obtained from all participants involved in the study.
A total of 26 Aboriginal children (12 girls and 14 boys) consented to take part in this study. Of these, 18 children attended the introductory session and were given cameras. A total of 17 children (9 girls and 8 boys) returned their cameras, and each participated in an individual yarn about their photos (Figure 1). The number of photos taken per child varied between 8 and 11. Thirteen yarns were audio-recorded, and hand notes were taken for the remaining four due to the community location of the yarn. In Aboriginal and Torres Strait Islander culture, a yarn is a relaxed and informal style of conversation that takes its own time, often flowing around a topic as information and stories are shared and then within the topic until the natural completion of the yarn [26].
---
Procedure
Parents of potential participants were handed recruitment packages with child and parent information statements and consent forms. Children who signed the consent forms were contacted by APOs via their parents and invited to attend an introductory group yarn in which the study aims, consent process, and study procedures were explained. Each child was provided with a digital camera and informed of its functions. Participants were given a week to take photos of the perceived barriers and facilitators to their physical activity participation in their community. The children also took photos of the physical activities that they enjoyed or wished to engage in. At the end of the week, yarning sessions were undertaken with each child in either a community location or the ACMS according to participant convenience and preference. These were conducted by APOs (J.S. and N.T.) or the lead investigator (J.G.), audio recorded or handwritten where the location was not conducive to audio recording, and audio recordings later transcribed for analysis. Children were invited to yarn about each of the photos they had taken, and these were uploaded to a secure location on the researcher's computer. Prompts were co-designed with the APOs from the participating communities [27].
Once all individual yarns were completed, participants were then invited to a followup group yarn to select photos for community posters. Nine children and two parents (who were also aunties to other participants) took part in the first group yarn in Community A, and five children and one parent took part in the second (follow up) yarn to finalize their choices (Figure 2). 'Aunty' in Aboriginal culture is a term used to describe a respected female Elder in the community who may not necessarily be a family member [28]. In Community B, APOs reached consensus about which photos best reflected the themes arising from the individual yarns with children. Two children and two parents then met for a follow-up group yarn.
A repeated reflexive approach was taken throughout the process of finalizing photos deemed suitable for inclusion on posters. In Community A, photos were printed out by the research team and brought to the first follow-up group yarn. Children considered their photos and selected those that best represented their views of barriers and facilitators of physical activity. A parent or caregiver of each participant was present for this process. In Community B, due to local community factors at the time, children did not meet as a focus group to identify their selection. Here, the APOs considered the transcripts and handwritten notes, discussed each child's photos, and reached consensus regarding those that best reflected the issues raised by the majority of participants in their interviews. Participants taking part in the group yarn concurred with the APOs' reasoning and choice.
The final selection of photos (and related texts) was then considered for inclusion in several draft posters of differing designs by the research team. These posters were intended to be facilitators for community discussion of results. APOs invited all participants and their parents to take part in a poster design focus group in each community. Handwritten notes of the discussion were taken from these focus groups, which largely included parental feedback. To add richness to the findings, notes were cross-checked against key themes by the first author, and information relating to these themes was included.
---
Data Analysis
Yarning transcripts and photographs were entered into a qualitative research software package NVIVO Version 11 (QSR International, Melbourne, Victoria, Australia) [29], for thematic analysis. Thematic analysis was informed by Braun and Clarke's six stages, which involved data familiarization, initial coding and searching, and reviewing and defining themes [30]. To enhance the rigor of thematic analysis, S.L. and J.G. independently coded the first three yarns before discussing their similarities and differences. This aimed to reduce subjectivity that can occur when coding is completed by one researcher [31]. The remainder of the yarns were coded by the first author. Codes were grouped together by looking at the relationships and connections between them to create categories and, subsequently, subthemes and overarching themes [30]. Preliminary themes along with the original transcripts and photos were sent to the APOs for their review and feedback (written and verbal). This feedback informed the final themes. Posters containing participants' photos and final themes were co-created with the APOs.
---
Feedback of Study Outcomes to Communities
Results in the form of the posters and a verbal presentation, with or without PowerPoint slides, were discussed at meetings with local city council representatives, key Aboriginal community members involved in the MRDPP, and members of the MRDPP Steering committee. Stakeholders were provided with a copy of the final MRDPP report to contextualize the conduct of this study [19]. Results were also presented for discussion at meetings of the Aboriginal Educational Consultative Groups (AECG) in both communities. Minor changes to wording in one poster were suggested and incorporated.
---
Socio-Ecological Framework
Physical activity participation is a complex behavior and is determined not only by the individual or their local environment but by 'broader socioeconomic, political and cultural contexts' [32] (p. ii10). A socio-ecological framework was applied to the barriers and facilitators identified by children to assist in understanding the scope of these complex factors and the 'levels' at which these exist in the participants' environment. We applied the framework used in a recent mixed-methods systematic review of the barriers and facilitators to Aboriginal and Torres Strait Islander children's participation in sport and physical activity [18] and coded the findings according to the levels they described: individual, interpersonal, community, and policy/institutional. In doing so, we aimed to align our findings and contribute to building evidence for practice.
---
Results
Thematic analysis revealed seven key themes (Table 1). Interviews and photos depicted a wide range of sports and physical activities enjoyed by the participants, including different types of football, bike-riding, basketball, soccer, running, and swimming. Photos largely reflected the barriers that participants experienced when accessing physical activity opportunities.
---
Barriers
The physical environment was a key barrier to physical activity, particularly for Community A's participants. Participants cited the littered and vandalized community facilities as a deterrent. Poorly maintained and run-down sporting venues were also reported, with tennis and basketball courts overgrown with grass and no usable equipment (Figures 3 and 4). The poor state of these facilities prevented children from playing there despite their desire to.
An 'this is a photo of the basketball court. People used to drink there a lot and they used to like throw beer bottles and now it's all wrecked because of them an' the basketball nets are like, poles are like, falling, tilting, like it's about to fall . . .
---
(P6A.)
Participants discussed their experience of a lack of safety when engaging in physical activity due to hazards in the surrounding physical environment. The presence of litter such as glass in local playgrounds was identified by children as 'dangerous'. During the follow-up yarns, most children described continuing to play in playgrounds and parks despite it being unsafe.
. . . and you can't really see if there's any glass or anything, so you never know when walking around in there. So, it's not very safe.
---
(P2A.)
The lack of designated space for children to engage in sports was identified by participants, who also described playing non-organized sports in spaces such as near main roads. This reinforces children's safety concerns around their physical environment and the lack of accessible and safe places to undertake physical activity.

Children identified consumption of unhealthy foods, including processed foods and sugary drinks, as a barrier to engaging in an active lifestyle. They discussed this factor as related to the development of obesity and diabetes, which, in turn, they perceived as having a negative impact on being active. Photos captured unhealthy foods on participants' laps and signs of fast-food stores.
---
[Soft drink] …it can stop us from playing games outside and it could give you diabetes and you can't really like have what you want to eat sometimes... (P7B.)
Well like junk food like would like stop you from a lot of sports, like putting on the weight and like things stuff like that. (P9A.)
The follow-up yarns expressed the view that the proximity and exposure of unhealthy food and drinks was a contributor to the consumption of these discretionary items. Children would pass the corner shop on the way to school, and high schools would sell sugar-sweetened beverages to students.
Participants acknowledged that engagement in excessive screen-based activities was sedentary behavior. In interviews, children acknowledged that screen-based activities displaced physical activity participation and recognized the impacts of this. Photos depicted different types of technology use, including iPads and computers.
---
… sitting down …playing the play station or the phone instead of going out and being active… (P5B.)
The cost to participate and access physical activity opportunities was noted by participants. The high price of transport, sports registrations, equipment, and its maintenance were prohibitive for some parents. The cost barrier for parents hindered children from participating in their desired sport(s). In one photo (Figure 5), a participant held up a sign in front of a petrol station stating;
Mum only has $5 left from her pay. I play at [a large regional city] that's not going to get me there and back. (P4B.)

Handwritten notes from the second group yarns reported that parents were not aware of the funding and support that may be available to enable their children to participate in organized sport(s).
Lack of access to transport, both public and private, was associated with limited parental finances and availability of public transport, particularly when children lived out of town. Participants were reliant on parents or extended family members for transport to regular sporting competitions or community facilities. The availability of transport depended on family routine and dynamics. The issues with availability and affordability of transport were emphasized during the follow-up group yarns. Children discussed walking due to limited access to transport and this being the least-expensive option.
Five community-level, three interpersonal-level, and two individual-level barriers (Table 1) were identified when the socio-ecological model was applied. Children perceived barriers to participating in physical activity around: the physical environment, particularly the availability of safe and accessible community facilities; lack of parental finances to support sports participation; consumption of an unhealthy diet; and participation in sedentary activities.
---
Facilitators
Family members' participation in sports and/or their sporting achievements were identified in both Community A and B as key factors facilitating physical activity, providing children with important role models for being active.
---
…we started paddling out and I asked Dad if I could have a go. (P9A.) …my brother is surfin' an' we all love surfin'… (P3A.)
Family activities such as fishing were enjoyed on a regular basis. Participants in Community A reported that school facilitated their engagement in regular physical activity. School events, such as the athletics carnival, encouraged children to engage in a variety of sports and to train for them in their own time. The provision of facilities such as the school oval gave children opportunities to engage in physical activity during lunch times.
---
I don't do any sports after school but um every lunch time I'm normally playing touch footy or I'm doing basketball, basketball with my friends. (P1A.)
Group yarns (Community A and B) reiterated these findings and discussed school as an important factor in helping children form an active lifestyle. The school was an environment that offered a wide range of opportunities to be active and an opportunity for children to engage in sport with their peers. Schools also enabled participation in physical activity through the provision of financial support and transport, both of which addressed factors described as barriers.
Participants enjoyed regular physical activity when they had access to adequate equipment and opportunities. In the final group yarns, participants were enthusiastic about outdoor play/non-organized physical activity as it was enjoyable, there was free choice of activities, and anyone could participate. Despite experiencing the complex barriers that made it difficult for children to be active, including gender role perceptions for one child, participants still desired to engage in physical activity.
I took that picture like that cos it's just saying that some kids actually wanna go in there and use it and stuff. (P2A.)
Too old to play football because I am a girl, I still want to play football though. (P3B)
Participants proposed several suggestions to improve opportunities for physical activity in their community. This included better facilities and improved use of space by building community facilities.
. . . the council should put ah real basketball court out the ridge cos we have a lot of space there. (P3A.)
Three interpersonal facilitators, and two each at the individual, community, and institutional levels (Table 1), were identified when the socio-ecological model was applied. Facilitators were largely apparent at the individual and interpersonal levels, with friends and family the key facilitators. At the institutional level, schools were central to many children's ability to take part in sports and physical activity. Children's vision for improvements to their opportunities for physical activity was directed at the community level. They imagined facilities that better suited their community, along with better use of space for community facilities.
---
Discussion
This study appears to be the first to explore rural NSW Aboriginal children's perceptions of the barriers to and facilitators of their sports and physical activity participation. We found that the key facilitators of Aboriginal children's physical activity exist at the interpersonal and institutional levels of the socio-ecological approach [18] and are physical activity engagement with friends, the strength of the family unit, and schools presenting opportunities for children to be active. The key barrier to physical activity participation identified by children was at the community level regarding poorly maintained community facilities and related safety issues. Other barriers perceived by participants included: intake of unhealthy foods, excessive screen time, inability to afford physical activity opportunities experienced as costly, and reliance on parents for transport.
The strength of the family unit as a key facilitator for physical activity aligns with the perceptions of Aboriginal children elsewhere [10][11][12][13][14]. Children discussed their family members (parents or siblings) who participated in sport and their sporting achievements as supporting and encouraging their physical activity. This factor is also a prominent facilitator for Aboriginal and Torres Strait Islander adults' physical activity participation [3]. Aboriginal people view physical activity as a collective occupation providing connections with others and the wider community [33]. Aboriginal families (parents and siblings) play a crucial role in supporting children and young people's physical activity engagement through encouragement, role-modeling an active lifestyle, and facilitating activities involving exercise [12,13]. The lack of family involvement has been described as hindering children's physical activity engagement in the Torres Strait and surrounding country [10].
Friends enable physical activity participation through the inherent enjoyment and fun experienced by children being active together in play, general activity, and sport [12]. Participants' enjoyment and desire to participate in physical activity led them to hold aspirations for their community, including how space can be utilized to build community facilities such as a new basketball court. Enjoyment of sport and a desire to remain physically active have also been identified as facilitators to physical activity participation by Aboriginal adults [3,34]. As such, strategies to increase physical activity should explore options where children can also socialize with their peers or within an environment that encourages social connection.
School is experienced by Aboriginal children in this study as an environment that not only has better access to facilities and equipment but fosters socialization with friends. This aligns with findings elsewhere that have identified that an established relationship between schools and the community positively influences young Aboriginal people's engagement in physical activity [12] and that Aboriginal children report school facilities and community events provide them with opportunities to be active [9,11,12].
Deteriorating community facilities and the resulting lack of safety reported by these NSW rural children expand on reports from studies in other Australian jurisdictions regarding rural Aboriginal children's perceptions [12]. These factors present a significant deterrent to physical activity [35]. NSW state government policies and legislation control the availability and quality of community facilities and the accessibility of neighborhoods, often through the actions of local councils that it funds [32]. Infrastructure in these communities is primarily funded by rates collected from residents [36]. As rates are calculated on property value [36], and property values are lower in the participating communities, fewer funds are available for infrastructure management. We suggest that the potential benefits of supplementing rates with additional funds be considered by local councils to ensure that infrastructure relevant for children's health and wellbeing is adequately maintained in disadvantaged areas.
Participants in this study largely appeared to understand physical activity as engagement in organized sports, such as football, along with related non-organized sport/practice. The availability of relevant, accessible community facilities is therefore important. We note, however, that children did not consider the incidental exercise that takes place from day to day, such as walking to and from community facilities or walking as transport, as physical activity. We call for local councils, communities, and schools to consider campaigns to promote alternatives to team sports, such as bike riding and walking, to support children's understanding that participating in such activities is also beneficial for their health. Such campaigns must be led by and co-designed with Aboriginal communities [27,37].
Children in this study identified the consumption of unhealthy foods and exposure to excessive screen-time as barriers to physical activity. Children described the association of these factors with low levels of physical activity and poor physical health, citing chronic diseases such as diabetes and obesity, both prevalent in their communities [2]. These have not been identified as barriers by young people in previous studies exploring Aboriginal and Torres Strait Islander children's views on their physical activity [10,12] and should be harnessed in the design of future strategies to improve physical activity participation. Sedentary behavior due to time spent on screen-based activities is an issue for all children; however, a national report has found that Aboriginal children spend 25 min more on technology per day than their non-Aboriginal counterparts [7]. This is, therefore, a barrier that also warrants inclusion in programs that address children's physical activity participation.
Participants described parental circumstances around vehicle availability and sufficient finance to afford car-associated costs as barriers to accessing sporting competitions or community facilities. This has also been identified by other young Aboriginal people as a barrier to accessing physical activity opportunities [12]. Transport disadvantage is common for Aboriginal people due to lack of access to and affordability of private and public transport options [38], particularly for those living in rural and remote parts of Australia. Lack of transport has been identified as a key barrier to physical activity and sports participation by Aboriginal and Torres Strait Islander adults [3]. The costs of public bus services in rural NSW have been found to be substantially higher than metropolitan areas and are more than residents are able to afford [39]. A lack of affordable and accessible transport places Aboriginal children at a further disadvantage when accessing physical activity opportunities. We suggest that local councils consider offering (or expanding) a community bus service to support weekend sport participation for children.
The inability to afford to participate in physical activity, including organized sports due to low income, has been noted by young rural Aboriginal people [12]. Aboriginal adults have also stated that the high cost of sports participation relative to their income is a very significant barrier to accessing physical activity opportunities [3,34]. While costs are also cited as a top barrier for other Australian children [40], additional financial barriers exist for Aboriginal people who experience socioeconomic disadvantage more than other Australians and possess a lower weekly household income compared to other households [5]. Associations between low physical activity levels and socioeconomic disadvantage have previously been identified [41], and the high costs associated with sport may contribute to low rates of physical activity for Aboriginal children and youth. In our study, parents indicated that they were not aware of local schemes through sports organizations or local councils to support the costs of children's participation in sports. It has been suggested elsewhere that better promotion of sporting opportunities through local agencies and clubs to young Aboriginal people may influence physical activity participation [12].
The enduring impact of colonization on Aboriginal communities is an overarching driver of the barriers to physical activity participation identified in this study and was identified as such by the APOs on this study when discussing the results. The socioeconomic disadvantage and lower weekly income evident in many Aboriginal communities [5] have been acknowledged as enduring impacts of colonial government policies, which also included regulating income of Aboriginal people, forced disconnection from traditional land, forced removal of children, and marginalization of communities [42]. Marginalization included being required to live in settlements or missions 'out of town' and being either barred from entering a town or segregated if permitted to use facilities [1]. Poor community cohesion and racism were identified by Aboriginal parents from the participating communities as an ongoing barrier to their children being active [19], and also have their origins in colonial-government policies that disrupted and fractured communities [33].
Adopting the principles of co-design [27,36] when developing physical activity programs for Aboriginal children and ensuring that these programs are led and delivered by local Aboriginal community members [43] is recognized as imperative to improving the accessibility and cultural relevance of such strategies [23,33]. However, these approaches are yet to be widely implemented:
What has been missing from these . . . (government policies since 1989) . . . commitments is the genuine enactment of the knowledges that are held by Indigenous Australians relating to their cultural ways of being, knowing and doing. Privileging Indigenous knowledges, cultures and voices must be front and centre in developing, designing and implementing policies and programs. The sharing of power, provision of resources, culturally informed reflective policy making, and program design are critical elements [44] (p. 1).
Strengths of this study include the use of a novel method of investigating Aboriginal children's perceptions of physical activity participation, allowing their voices to be heard. The participatory action research approach used in this research enabled a flexible response to participant and community needs and supported their engagement at all stages of the study. A reflexive approach to the final selection of photos allowed careful consideration of those that best represented participants' views. The strong Aboriginal community governance structure enabled guidance on all aspects of the research process [23]. Community consultations allowed findings to be discussed with various Aboriginal community members who have been involved in the MRDPP and with local council representatives who wished for additional information. The posters distributed to community stakeholders allowed for further dissemination of results at a local level.
A limitation to this study was that a number of community-level events and challenges unrelated to the study emerged in Community B over the time that the yarns took place. These impacted recruitment numbers and children's participation in follow-up yarns. However, the participation of APOs from the communities to some degree mitigated this issue, and feedback received from the community when results were presented was positive.
---
Conclusions
This photovoice study enabled Australian Aboriginal children from rural NSW to describe their experiences of sport and physical activity participation in their communities for the first time. Results extend the limited representation of Aboriginal children's voices on this topic nationally. The identification of key facilitators at the interpersonal and institutional level and of barriers at the community level offer guidance for future strategies to address improvements in enabling Aboriginal children to participate more fully in the sports and physical activities that they aspire to. Prioritizing the maintenance of community facilities is important in enabling access to physical activity opportunities, and children held strong aspirations for improved and accessible facilities. Transport accessibility, along with the costs of sports participation, continue to be barriers to Aboriginal children's engagement in sport and physical activity and require a whole-government response. The strengths of families and friendships should be harnessed to facilitate participation in sport and physical activity.
Barriers and facilitators identified by Aboriginal children are a result of the enduring impact of colonization on families and communities. Aboriginal community co-design and leadership of all matters of relevance to their communities, including in public health and health promotion, are essential and widely recognized as central to improvements in health and wellbeing [45]. However, the development of policies and programs that embody these approaches is only emerging, and implementation is yet to be fully understood and accepted. Only once this occurs will Australian Aboriginal children be enabled to wholly engage with and benefit from the sports and physical activity that they desire.
Author Contributions: Conceptualization, J.G., J.S. and N.T.; methodology, J.G. and S.L.; software, S.L. and J.G.; validation, S.L., J.G., J.S. and N.T.; formal analysis, S.L., J.G., J.S. and N.T.; investigation, S.L., J.G., J.S. and N.T.; resources, J.G.; data curation, S.L. and J.G.; writing-original draft preparation, S.L., J.G., J.S. and N.T.; writing-review and editing, S.L., J.G., J.S., N.T., R.P., E.L.J. and N.A.J.; visualization, S.L., J.G., J.S., N.T., R.P., E.L.J. and N.A.J.; supervision, J.G.; project administration, J.G., J.S. and N.T.; funding acquisition, J.G., R.P., E.L.J. and N.A.J. All authors have read and agreed to the published version of the manuscript.
---
Data Availability Statement: Restrictions apply to the availability of these data. Data was obtained from the participating Aboriginal communities and are available from the authors with the permission of the representatives of these communities.
---
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
---
Conflicts of Interest:
The authors declare no conflict of interest. |
Adolescence is characterized by heightened susceptibility to peer influence, which makes adolescents vulnerable to initiating or maintaining risky habits such as heavy drinking. The aim of the study was to investigate the association of social capital with longitudinal changes in the frequency of binge drinking among adolescents at public and private high schools in the city of Diamantina, Brazil. This longitudinal study used two waves of data collected when the adolescents were 12 and 13 years old. At the baseline assessment in 2013, a classroom survey was carried out with a representative sample of 588 students. In 2014, a follow-up survey was carried out with the same adolescents when they were aged 13 years. The Alcohol Use Disorder Identification Test-C (AUDIT C) was employed for the evaluation of alcohol intake. Our predictor variables included sociodemographic and economic characteristics (gender, type of school, mother's education, family income) and social capital. For the evaluation of social capital, we used the Social Capital Questionnaire for Adolescent Students (SCQ-AS). Descriptive and bivariate analyses were performed (p < 0.05). A log-binomial model was used to calculate prevalence ratios (PR) and 95% confidence intervals; the two-tailed p value was set at < 0.05. The prevalence of binge drinking was 23.1% in 2013 and had risen to 30.1% by 2014. Gender (PR 1.48; 95% CI 0.87-2.52) and socioeconomic status (type of school and mother's education) were not associated with the increase in the frequency of binge drinking. However, higher social capital was significantly associated with an increase in binge drinking by students: adolescents who reported an increase on the social cohesion in the community/neighborhood subscale were 3.4 times more likely (95% CI 1.96-6.10) to binge drink. Our results provide new evidence about the "dark side" of social cohesion in promoting binge drinking among adolescents.
---
Introduction
Adolescence, more than any other developmental stage, is characterized by heightened susceptibility to peer influence [1], which makes adolescents vulnerable to initiating or maintaining risky habits such as heavy drinking [2]. People are likely to engage in behaviors that match their perceptions of what is "normative," especially the characteristics of those who represent idealized identities, such as high-status peers. Many deviant and risky behaviors are associated with high peer status, and it has been suggested that some adolescents strive to imitate their high-status peers through a process of social comparison [3], whereby adolescents contrast their own values, interests, beliefs, and behaviors with their perceptions of others and, in doing so, construct a sense of identity.
Various risk factors for problem drinking among youth have been identified by researchers, with the emphasis across studies on risk and protective factors [4,5]. There is increasing evidence that social environmental factors influence alcohol consumption and related harms among youth. Social capital is one contextual factor that has been related to binge drinking, defined as consuming five or more drinks on one occasion [6], among adolescents. Social capital is defined as the resources, such as social support, trust, and information channels, accessed by individuals through their social networks [7]. Social trust and social participation have each been protectively associated with alcohol use among high school students [8].
Binge drinking has a strong social component [9,10]. Adolescents are more likely to drink in social settings, allowing for their drinking habits to be visible to peers. The combination of risk taking and the visibility of alcohol use in peer settings may allow adolescents to maintain their social network status and gain popularity [11]. In addition, some studies have shown that binge drinking varies by gender and socioeconomic status, although these associations are not always consistent.
Because both alcohol use and peer influence increase during adolescence, it is critical to consider longitudinal influences of peer groups on the developmental trajectory of adolescent alcohol use [12]. Furthermore, studies that investigated the association between binge drinking and social capital have not attempted to identify differences among the sub-dimensions of the social capital construct [4,13]. The aim of the present longitudinal study was therefore to investigate the association of social capital with longitudinal changes in the frequency of binge drinking among adolescents at public and private high schools in the city of Diamantina, Brazil.
---
Materials and methods
---
Study design and sample
To investigate the incidence of binge drinking, a survey was carried out involving all adolescents enrolled in the public and private schools of the city of Diamantina/MG, Brazil, who were a full 12 years of age during the data collection months of the study. Data on school addresses and the number of students enrolled in each class were obtained from the State and Municipal Education Departments.
Subsequently, 633 adolescents from all 13 public and private schools in Diamantina/MG were invited to participate in the study; the schools were first notified by telephone to schedule the researcher's visit. At that visit, the objectives of the research and the activities to be carried out at the school were explained, and the approval of the Ethics Committee and the authorizations of the State and Municipal Secretariats of Education were presented. After obtaining the consent of the school management and teaching staff, students who met the inclusion criteria (enrolled in a public or private school in the urban area of Diamantina, a full 12 years of age on the day of the assessment, authorized by parents/guardians, and agreeing to participate) were contacted by the researcher during class time, in the teacher's presence, for awareness raising. Adolescents not authorized by parents or guardians, or who did not agree to participate, were excluded from the study. The researcher explained the purpose of the research and asked the students to answer the questionnaires, ensuring the confidentiality of their answers and of the evaluation of their participation.
In the baseline survey (2013), the sample consisted of 588 students (participation rate: 92.89%). The reasons for dropout were non-authorization from parents/guardians or adolescents (4.62%; n = 28) and failure to complete the questionnaires (2.9%; n = 17). In 2014, a new data collection procedure was carried out with these adolescents when they were aged 13 years. Again, all 13 public and private schools in Diamantina/MG were invited to participate and were notified by telephone beforehand to schedule the researcher's visit. Only adolescents authorized by their parents or guardians and who agreed to participate were included. Thus, the follow-up study involved a sample of 588 adolescents (100%). To achieve a 100 percent follow-up rate, the researchers responsible for data collection telephoned the homes of students who were not present on the previously scheduled day, which led them to return to some schools more than once. Furthermore, access was relatively easy because the researchers live in the region and had close contact with the schools' directors.
---
Measures
The Alcohol Use Disorder Identification Test (AUDIT C), validated for use in Brazil [14], was employed for the evaluation of alcohol intake. The AUDIT instrument can identify whether an individual exhibits hazardous (or risky) drinking, harmful drinking, or alcohol dependence [15]. The AUDIT C (the first three questions of the AUDIT instrument, which relate to the frequency and amount of alcohol consumed) was used, as this version can be employed as a stand-alone screening measure to detect hazardous drinkers among adolescents [16,17]: a) "How often did you have a drink containing alcohol in the past year?" b) "How many drinks containing alcohol did you have on a typical day when you were drinking?" c) "How often do you have five or more drinks on one occasion?" The latter item was used to identify binge drinking [18]. The response options are never, less than monthly, monthly, weekly, and daily or nearly daily. Responses of "never" were coded as 0 in the analysis; "less than monthly" and "monthly" were coded as 1; "weekly" and "daily or nearly daily" were coded as 2. Although the AUDIT C was used to measure alcohol involvement, the dependent variable was the change in alcohol consumption between 2013 and 2014, categorized as "reduced or unaltered frequency of intake" or "increased frequency of intake", based only on the AUDIT binge item (item c).
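The response coding described above can be sketched as follows. This is an illustrative example only, not the authors' actual analysis code; the dictionary and function names are ours.

```python
# Hypothetical sketch of the AUDIT C response coding described in the text.
AUDIT_C_CODES = {
    "never": 0,
    "less than monthly": 1,
    "monthly": 1,
    "weekly": 2,
    "daily or nearly daily": 2,
}

def code_response(response: str) -> int:
    """Map an AUDIT C response option to its analysis code (0, 1, or 2)."""
    return AUDIT_C_CODES[response.strip().lower()]
```

Under this coding, the binge item (item c) takes the value 0 for "never" and a positive value for any reported frequency of consuming five or more drinks on one occasion.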
Our predictor variables included sociodemographic and economic characteristics (gender, type of school, mother's education, family income) and Social Capital.
For evaluation of social capital, we used the Social Capital Questionnaire for Adolescent Students (SCQ-AS), which was developed and validated by our research team [19]. The population included in the development and validation of the instrument was a convenience sample of 101 students aged 12 years enrolled in the public and private school systems in the city of Diamantina/MG, Brazil. This questionnaire is composed of items selected from the national and international literature and has been submitted to face validation, content analysis, and analyses of internal consistency (Cronbach's alpha: 0.71), reliability, and reproducibility (Kappa coefficient range: 0.63 to 0.97) [19]. Factor analysis grouped the 12 items into four subscales: Social Cohesion at School; Network of Friends at School; Social Cohesion in the Community/Neighborhood; and Trust at School and in the Community/Neighborhood. Social capital scores range from 12 to 36 points, with a higher score denoting higher social capital (Table 1). As the questionnaire was designed for children and adolescents, a three-point Likert scale was used, with response options of "I agree", "I do not agree or disagree", and "I disagree"; this choice was based on the target age group and was intended to avoid confusion while filling out the questionnaire. The findings confirm indications in the literature that networks of friends and neighborhood cohesion reflect experiences shared with one's peers and underscore the importance of the questionnaire as an assessment tool for measuring social capital. Based on the distribution, the social capital variable was dichotomized at the median as high (31 points or more) or low (less than 31 points).
The difference between the two measures of social capital was calculated for each adolescent as the total social capital score at follow-up (FSC) minus the total score at baseline (BSC), yielding three categories: increase in social capital (FSC > BSC), reduction (FSC < BSC) and unaltered (FSC = BSC). We treated sex, type of school (public or private), maternal education and family income as time-invariant.
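A minimal sketch of these derived variables, assuming the scoring described above (function names are illustrative):

```python
def social_capital_change(bsc, fsc):
    """Classify the change in total social capital between baseline (BSC)
    and follow-up (FSC); total scores range from 12 to 36 points."""
    if fsc > bsc:
        return "increase"
    if fsc < bsc:
        return "reduction"
    return "unaltered"

def dichotomize(total_score, median=31):
    """Median split used in the study: high = 31 points or more."""
    return "high" if total_score >= median else "low"
```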
---
Statistical analysis
Data analysis was performed using the Statistical Package for the Social Sciences (SPSS for Windows, version 22.0, SPSS Inc., Chicago, IL, USA) and included frequency distributions and association tests. The chi-square test was used to determine the statistical significance of associations between binge drinking and the independent variables (p < 0.05). Given the high prevalence of the outcome (> 20%), log-binomial models were used to calculate prevalence ratios (PR) and 95% confidence intervals in both univariate and multivariable analyses [20]. The two-tailed significance level was set at p < 0.05.
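For illustration, a crude prevalence ratio with a Katz log-based 95% confidence interval can be computed from a 2 × 2 table as below; the log-binomial models used in the study additionally allow covariate adjustment, which this sketch does not attempt:

```python
import math

def prevalence_ratio(a, b, c, d, z=1.96):
    """Crude prevalence ratio and Katz log-based 95% CI from a 2x2 table:
    exposed:   a with the outcome, b without
    unexposed: c with the outcome, d without
    (illustrative only; an adjusted log-binomial GLM is used in the paper)."""
    p1 = a / (a + b)   # prevalence among the exposed
    p0 = c / (c + d)   # prevalence among the unexposed
    pr = p1 / p0
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lower = math.exp(math.log(pr) - z * se)
    upper = math.exp(math.log(pr) + z * se)
    return pr, (lower, upper)
```

With hypothetical counts of 30/70 exposed and 10/90 unexposed, the crude PR is 3.0.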
---
Ethical considerations
This study received approval from the Human Research Ethics Committee of the Federal University of Minas Gerais (Brazil) (COEP-317/11). All parents/guardians signed a statement of informed consent.
---
Results
The sample comprised 588 students (participation rate at the one-year follow-up: 100%). Boys accounted for 48.6% (n = 286) of the sample, and the vast majority of participants attended public schools (92.2%; n = 542). A total of 75.2% (n = 442) of the adolescents were from families earning up to three times the Brazilian monthly minimum wage, and 61.6% (n = 361) of the mothers had less than eight years of schooling (Table 2).
The prevalence of binge drinking was 23.1% in 2013 and 30.1% in 2014, an increase of seven percentage points over the period. Of the 452 adolescents who reported never consuming five or more alcoholic drinks on one occasion in 2013, 41 started to do so with some frequency in 2014 (Table 3).
According to the changes in the total social capital score between baseline (2013) and follow-up (2014), 340 (58.4%) adolescents had an unaltered total score at follow-up, 184 (31.6%) showed an increase and 58 (10.0%) showed a reduction. Six students did not adequately answer the questionnaire.
Table 4 shows the distribution of the sample across the social capital subscales between baseline and follow-up and the association with the change in binge drinking over the same period. In total, 166 (28.3%) students increased their score on the 'Social Cohesion at School' subscale and 457 (78.0%) had a reduced or unaltered score on the 'Network of Friends at School' subscale between baseline and follow-up. Twenty-six (21.4%) adolescents who reported an increase on the 'Social Cohesion in the Community' subscale also showed an increase in binge drinking at follow-up, and 188 (95.4%) reported a reduction on the 'Trust' subscale together with a reduction in binge drinking (Table 4).
The log-binomial models show the incidence of binge drinking according to the background characteristics of the respondents. Gender (PR 0.67; 95% CI 0.40-1.13) and socioeconomic status (type of school and mother's education) were not associated with an increase in the frequency of binge drinking. However, social capital was significantly associated with an increase in binge drinking among the students (Table 5). Table 6 shows the prevalence ratios of changes in the frequency of binge drinking according to the social capital subscales. Adolescents who reported an increase on the social cohesion in the community/neighborhood subscale were 3.3 times more likely (95% CI 1.83-6.19) to increase their binge drinking. In addition, adolescents who reported a decrease on the trust subscale were less likely (PR 0.4; 95% CI 0.21-0.91) to do so. The social cohesion at school and network of friends at school subscales were not associated with the outcome.
---
Discussion
The present study examined the frequency of binge drinking among adolescents at public and private schools in the city of Diamantina (southeastern Brazil). The increase in the frequency of binge drinking over the follow-up period was seven percentage points, and this increase was fivefold greater among adolescents who exhibited an increase in social capital. Our social capital questionnaire was designed to distinguish the influence of social capital in the different contexts to which adolescents are exposed, i.e. the school environment versus the neighborhood environment; we therefore analyzed the subscales separately. Our findings suggest that adolescents' drinking behavior is more responsive to changes in the neighborhood context and trust than to the school context and the friendship network at school. The literature suggests that the concept of social capital can be broken down into 'structural' and 'cognitive' social capital [21]. Structural aspects of social capital refer to roles, rules, precedents, behaviours, networks and institutions. These may bond individuals in groups to each other, bridge divides between societal groups or vertically integrate groups with different levels of power and influence in a society, leading to social inclusion. By contrast, cognitive social capital taps perceptions and attitudes, such as trust toward others, that produce cooperative behaviour [22].
In contrast to the results of the present study, previous reports found that students from U.S. colleges with higher levels of social capital were at lower risk of binge drinking [5,23]. The discrepancy may be due to differences in the aspects of social capital examined in the different settings. Specifically, the study of binge drinking in US colleges focused on the structural aspect of social capital, as measured by the participation of students in voluntary activities [23]. In contrast, students in our Brazilian sample were at greater risk of binge drinking if they reported higher social capital in the cognitive dimension, i.e. feelings of greater cohesion in their communities and neighborhoods, and were at lower risk if they reported a decrease on the trust subscale. The differences between these studies, including the age of the subjects, underscore the observation that social capital can have both positive and negative health implications, depending on the form it takes [24]. In samples of older adolescents who binge drink more often, we may find a richer (e.g., expected gender effects) and possibly more intuitive pattern of results. Individuals who have higher levels of social support and community cohesion are generally thought to be healthier because they have better links to basic health information, better access to health services, and greater financial support with medical costs [7]. However, it is important to consider the impact of complex community factors on individual behaviors. Factors such as social stratification (i.e., the probability of living in certain neighborhoods, which is higher for certain types of persons) and social selection (i.e., the probability that drinkers are more likely to move to certain types of neighborhoods) may affect health risk behaviors, including alcohol use [7]. In addition, previous research has highlighted the importance of having trust in the peers with whom adolescents drink alcohol [25].
Young people usually drink more with peers whom they trust, probably because of a tacit acknowledgement that a friend understands unspoken rules and can be relied upon [25].
Past studies have found that binge drinking usually occurs in groups; therefore, peers play an important role in promoting binge drinking, perhaps through peer selection or peer influence (socialization) [4,23]. Our results show that the social cohesion in the community/neighborhood subscale was significantly associated with an increase in binge drinking, and that a decrease on the trust subscale was related to a decrease in the frequency of binge drinking among the students. Although the literature on peer influence on binge drinking is well established, the social cohesion at school and network of friends at school subscales were not associated with the outcome. Drinking is viewed by young people as a predominantly social activity that provides an opportunity for entertainment and bonding with friends [25]. Throughout life, friendships can direct development through support, modeling, and assistance, but the significance of friendships is heightened during adolescence [26]. A previous study showed that adolescents' baseline alcohol use status (drinker/nondrinker) strongly predicted the acquisition of friends exhibiting similar alcohol use patterns twelve months later [27]. Another study among young students [28], which analyzed individual and contextual risk factors for alcohol use (temperamental disinhibition, authoritarian and authoritative parenting, and parental alcohol use) assessed during childhood and adolescence, revealed significant variability in the association between alcohol consumption and deviant friends, with deviant friends being a significant covariate of alcohol consumption. Furthermore, this study revealed a significant Disinhibition × Parental Alcohol Use interaction: childhood disinhibition interacted with parental alcohol use to moderate the covariation of drinking and deviant friends [28]. The relationship between social environments and binge drinking is complicated in part by reverse causality or simultaneity. Environmental factors (i.e. school and neighborhood characteristics) may be spuriously linked to binge drinking because, for example, adolescents who live in neighborhoods where violent crime is high and access to illicit substances is easy may be less likely to be socially connected and more likely to consume alcohol [29].
Despite socioeconomic status being a well-established determinant of health, its influence is not well understood, and little research has focused on its effects during adolescence [30]. In the present study, socioeconomic status was not associated with an increase in the frequency of binge drinking among adolescents. Some studies have demonstrated that adolescents from higher socioeconomic status (SES) backgrounds have a greater propensity to use alcoholic beverages and to engage in binge drinking [4,31,32], possibly because of higher discretionary income (pocket money) or easier access to alcohol in their homes. However, other studies have found an association between lower socioeconomic status and greater alcohol consumption [16,33], and still others have found no significant association between socioeconomic status and alcohol intake [34,35]. The literature highlights that these divergent results may be partially explained by the different indicators adopted (family income, social class, level of schooling, school type), the considerable variation in cut-off points, and the specific culture and age of the drinkers.
In the present study, we did not find a statistically significant association between gender and the incidence of an increase in binge drinking. This may be explained by changing gender norms over time, which have made it more acceptable for girls to engage in risky behaviors [36]. In accordance with our results, a longitudinal study that used national data to describe gender differences in adolescent health behavior found that, in the case of binge drinking, girls' rates have converged with those of boys [36].
A limitation of our study is that, as the data were derived from self-administered questionnaires, a lack of attentiveness should be taken into consideration. Second, despite our emphasizing the importance of giving honest responses, the findings may have been underestimated due to self-censoring and/or a suspicion that school authorities could gain access to the answers on the questionnaires. Third, information on the influence of friends and the characteristics of friendship networks, such as density, size, quality of contacts, proximity and centrality, was not collected in the present study, despite the fact that binge drinking has been associated with such factors [1-4, 12, 13]. The aim of the questionnaire was to measure social capital in a way that is easily understood by and applicable to adolescent students and that encompasses the different domains of social capital for this population. Although the questionnaire did not measure characteristics of friendship networks, it does measure contexts involving social relationships, such as experiences at school and in the local community, which can influence the behavior and decisions of adolescents and thereby reflect health determinants. The validation of the Social Capital Questionnaire for Adolescent Students (SCQ-AS) demonstrates that this assessment tool is appropriate for epidemiological studies involving samples of adolescents that investigate the association between social capital and risk factors or health determinants. Finally, we cannot generalize the findings of this study to older adolescents within Brazilian culture.
---
Conclusion
Binge drinking involves groups of inter-connected people who evince shared behaviors, and is a public health and clinical problem. Targeting these behaviors should involve addressing groups of people and not just individuals [24]. Our results provide new evidence about the "dark side" of social cohesion in promoting binge drinking among adolescents. Social capital interventions must include school and community engagement, parental involvement, and peer participation components to address the complex array of factors that influence adolescent alcohol use.
---
All relevant data are within the paper and its Supporting Information files.
The present study aims to develop the Race-related Attitudes and Multiculturalism Scale (RRAMS), as well as to perform an initial psychometric assessment of this instrument in a national sample of Australian adults. The sample comprised 2,714 Australian adults who took part in the 2013 National Dental Telephone Interview Survey (NDTIS), which includes a telephone-based interview and a follow-up postal questionnaire. We used Exploratory Factor Analysis (EFA) to evaluate the RRAMS' factorial structure (n = 271) and then proceeded with Confirmatory Factor Analysis (CFA) to confirm the proposed structure in an independent sample (n = 2,443). Measurement invariance was evaluated according to sex, age and educational attainment. Construct validity was assessed through known-groups comparisons. Internal consistency was assessed with McDonald's Ω H and ordinal α. Multiple imputation by chained equations was adopted to handle missing data. EFA indicated that, after excluding 4 out of the 12 items, a two-factor structure provided a good fit to the data. This configural structure was then confirmed in an independent sample by means of CFA (χ2(19) = 341.070, p < 0.001, CFI = 0.974, RMSEA = 0.083; 90% CI [0.076, 0.091]). Measurement invariance analyses suggested that the RRAMS items can be used to compare men/women, respondents with/without tertiary education and young/older participants. The "Anglo-centric/Assimilationist attitudes" (Ω H = 0.83, α ORDINAL = 0.85) and "Inclusive/Pluralistic attitudes" (Ω H = 0.77, α ORDINAL = 0.79) subscales showed adequate internal consistency.
---
Introduction
Racism emerges whenever social and individual values, norms and practices of a given group are considered superior to others'. Racism occurs with the particular aim of creating, maintaining or reinforcing power imbalances, as well as the corresponding inequalities in opportunities and resources along racial lines [1]. Similar to most contemporary societies, Australia is characterized by co-existing expressions of cultural diversity on the one hand, and negative impacts of racism on social cohesion on the other [1]. In Australia, the mental health costs directly attributable to racism have been estimated at 235,452 disability-adjusted life years lost, which is equivalent to an average $37.9 billion in productivity loss per annum, or 3% of the Australian annual Gross Domestic Product (GDP) over 2001-2011 [2]. Such a strong relationship is an indication that racism may erode the very social fabric of the Australian society by producing mental disorders and suffering, which unevenly impacts upon racially marginalized groups.
Social conceptions that shape intergroup relations form the common ground upon which intergroup attitudes and discriminatory behaviour take place [3]. From an empirical viewpoint, findings suggest that racist attitudes are associated with racist behaviours and with racial-ethnic minorities' experiences of discrimination [4]. Positive attitudes towards diversity, however, are negatively associated with discriminatory behaviour [5]. In this study, we propose to explore attitudes in relation to multiculturalism, a construct of special relevance to the social, economic and political fabric of contemporary Australia [6]. We focus on multiculturalism as an ideology of acknowledging and celebrating ethnic and cultural differences, in which the need for preserving cultural identities is recognized [7]. It reflects a "sensibility and [a] disposition towards cultural differences among large sections of the population" [8]. Data from the 2016 Australian Census revealed that one in three Australians were born overseas, and a similar proportion speak a language other than English at home. Nevertheless, assimilationist attitudes (expectations of conformity to the dominant culture) often prevail, as opposed to multiculturalist perspectives that accept and praise racial and ethnic-cultural diversity [9]. Understanding attitudes to multiculturalism can contribute to unveiling the dynamics of racism and discrimination against minorities in the country, fostering public debate and policy formulation aimed at promoting positive intergroup relations [10].
Research on ethnic-racial intergroup attitudes draws from theories on ideological attitudes that explain group-based dominance and social cohesion [11][12][13]. Social Dominance Orientation (SDO), for example, reflects the degree to which respondents believe that hierarchy-based dominance between social groups is natural [14]. Discrimination against minorities, therefore, can be explained by the degree of endorsement of the notion that group-based hierarchies are natural and inevitable [14]. Endorsement of group-based dominance and out-group prejudice tends to increase among those who highly identify with the dominant group, as they represent a mechanism of maintaining the in-group status quo [12].
Research on ethnic-racial intergroup relations in contemporary societies has also explored the Right-wing Authoritarianism (RWA) concept [15][16][17]. RWA is characterized by the endorsement of social conservative values, morality, collective security, group-based social cohesion, and strict obedience to social authorities [15,17]. Those who endorse RWA values can be more sensitive to threats to social stability, being prone to conservative values as to increase their perception of control and collective security [18]. Perception of threat has been shown to mediate the association between group identification and attitudes towards multiculturalism [11]. Those that consider immigrants or ethnic-racial minorities as a threat to the control of resources or maintenance of the dominant social values tend to endorse more conservative/assimilationist attitudes towards multiculturalism [11,19].
Sustaining the dominant group's status quo can also be achieved by not acknowledging ethnic-racial inequalities in the population. The so-called colour-blind racial ideology denies the existence of racism and justifies racial inequalities as the result of personal decisions, meritocratic achievements, and market forces [20,21]. By denying racist practices and racial inequalities, it provides the discursive tools to downplay policy proposals aimed at promoting racial justice and therefore maintains the power imbalance between ethnic-racial groups [20]. Following this perspective, public denial of racism has been pointed to as an obstacle to a deeper commitment to multiculturalism in Australia [13,22]. Although the existence of racism is acknowledged, most Australians fail to recognise the existence of Anglo-privilege, a necessary step in reducing the imbalance in resource distribution and political representation among ethnic-racial groups [13].
Taken together, the results mentioned above point to the centrality of properly assessing the different facets of intergroup attitudes towards multiculturalism as to inform public debate and contribute to prevent and counteract discrimination. It is important to note that the majority of the available scales used to assess race-related attitudes have been developed and psychometrically examined among U.S. populations [7]. These tools may not be relevant or provide valid/reliable estimates of race-related attitudes in non-US contexts, though, given the considerable contextual dependency of racism. Historiographic and sociological accounts of racial dynamics usually emphasize Australian specificities in terms of colonization, past and contemporary immigration policies, and patterns of cultural diversity as key aspects.
Australia is a settler society that started with a policy of Anglo-Celtic migration only. This was later expanded to include migrants from other European backgrounds (e.g., Greeks, Italians), and only in the 1980s were the borders opened to migrants of Asian and Middle-Eastern descent. These and other specificities (e.g., limited involvement in the slave trade) cast serious doubts on the idea of simply adapting tools developed in a range of different countries to the Australian context. As in other multiculturalist societies, including Canada and New Zealand, multiculturalism was debated at a national level as state policy in the 1970s. Backlashes from conservative sectors, nonetheless, contributed to prioritising an assimilationist perspective in the implementation of multiculturalist values in society. Australia has also historically dispossessed and oppressed the native Aboriginal Australians since British colonization, with ongoing effects to the present [23]. Our study does not focus on colonisation and the racism faced by Aboriginal Australians, as the unique features of these experiences can be diminished when considered under the umbrella of multiculturalism [24].
To the best of our knowledge, two measurement instruments that provide information on racial, ethnic, and cultural acceptance (i.e. race-related and multiculturalist attitudes) have previously been developed and assessed in Australia [7,25]. While the first focused on intercultural understanding among teachers and students in schools [25], the psychometric evaluation of the second was carried out in relatively young convenience samples of primary and secondary school students (all younger than 15 years and residing in Victoria) and community members (mean age of 23 years, with 70% residing mainly in Victoria), which limits their applicability at a national level and among older age groups. Therefore, neither has an integrated picture of attitudes towards multiculturalism across the country yet been delineated, nor has a range of strategies to advance racial equity based on this knowledge been proposed.
The present study proposes the Race-related Attitudes and Multiculturalism Scale (RRAMS) as a measure of attitudes towards multiculturalism. The items were formulated to reflect social ideologies and collective beliefs that potentially influence ethnic-racial intergroup attitudes. The aim of this study was to verify its applicability to the Australian context by assessing the extent to which the RRAMS provides a valid and reliable measurement of multiculturalist attitudes in a sample of Australian adults across all states and territories. In particular, the internal validity of the RRAMS was assessed in terms of its configural structure (i.e., the number of underlying factors), its metric properties (the magnitude of factor loadings), and measurement invariance (i.e., whether it allows meaningful comparisons across sociodemographic characteristics). The external validity of the RRAMS was then assessed in terms of its construct validity.
---
Methods
---
Study design and participants
This was an Australian population-based study, with data obtained from the 2013 National Dental Telephone Interview Survey (NDTIS), which includes a telephone-based interview and a follow-up postal questionnaire. The NDTIS has been carried out periodically by the University of Adelaide since 1994, and comprises a large national sample of Australian residents aged 5 years and over. The NDTIS is a random sample survey that collects information on the dental health and use of dental services of Australians in all states and territories. The survey also collects data on social determinants of oral health and wellbeing, which include detailed information on sociodemographic factors, such as household income, education, country of birth, remoteness of location and main language spoken at home. For the 2013 survey, an overlapping dual sampling frame design was adopted. The first sampling frame was created from the electronic product 'Australia on Disc 2012 Residential;' an annually updated electronic listing of people/households listed in the White Pages across Australia. Both landline and mobile telephone numbers were provided on records where applicable.
A stratified two-stage sampling design was used to select a sample of people from this sampling frame. Records listed on the frame were stratified by state/territory and region, where region was defined as Capital City/Rest of State. A systematic sample of records was selected from each stratum using specified sampling fractions [26]. To include households that were not listed in the White Pages, a second sampling frame comprising 20,000 randomly generated mobile telephone numbers was used. This sampling frame was supplied by Sampleworx, and the mobile telephone numbers were created by appending randomly generated suffix numbers to all known Australian mobile prefix numbers. As the mobile numbers did not contain address information, this sampling frame could not be stratified by geographic region. A random sample of mobile numbers was selected from the frame and contacted to establish the main user of the mobile phone. This person was asked to participate in the telephone interview, provided that they were aged 18 years or over. All participants provided verbal consent to participate in the survey, and datasets were de-identified to ensure anonymity [26].
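The stratified systematic selection described above can be sketched as follows; the stratum names and sampling fractions below are hypothetical, chosen only to illustrate the mechanism:

```python
def systematic_sample(records, fraction, start=0):
    """Systematic sample: take every k-th record, where k = 1/fraction."""
    k = round(1 / fraction)
    return records[start::k]

def stratified_systematic(strata, fractions, start=0):
    """Apply a stratum-specific sampling fraction within each stratum
    (strata and fractions are dicts keyed by stratum name)."""
    sample = []
    for name, records in strata.items():
        sample.extend(systematic_sample(records, fractions[name], start))
    return sample
```

For instance, with an 8-record "Capital City" stratum sampled at 1/4 and a 4-record "Rest of State" stratum sampled at 1/2, every 4th and every 2nd record is drawn respectively.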
Following the completion of the telephone interview survey, participants were invited to respond to the postal questionnaire component. Those who agreed were sent a covering letter with the questionnaire and a reply-paid envelope enclosed. A reminder postcard was sent two weeks later, with, if necessary, two additional follow-up letters/questionnaires sent subsequent to the postcard. A total of 6,340 Australian adults aged 18+ years took part in the 2013 NDTIS, with 2,935 (46.3%) completing the follow-up postal questionnaire. Sample characteristics are displayed in Table 1. Two thirds of the sample were aged 45 to 98 years and had attended Technical and Further Education (TAFE) institutions or university. Women corresponded to 60.3% of the sample. The majority of participants were born in Australia (76.7%), 12.8% were originally from Europe and 10.5% from other continents (Asia, Africa and the Americas).
---
Ethical approval
Ethical approval for the study was granted by the University of Adelaide's Human Research Ethics Committee (approval number HS-2013-036).
---
Statistical analysis
Statistical analyses were conducted with R software [27] and the R packages lavaan [28] and semTools [29].
Phase 1: Item development. The RRAMS was developed by a group of researchers with expertise on the topics of racism, multiculturalism, and race-related attitudes in Australia. To ensure content validity [30], the scale was based on large surveys carried out in the country that were co-designed by the abovementioned group of researchers. These include the 2015-16 Challenging Racism Project [31] and the 2013 survey of Victorians' attitudes to race and cultural diversity [32]. The initial item development phase consisted of designing items that reflect the different social ideologies encompassing multiculturalism and race-related attitudes. Discussions among the panel of experts were held until consensus was reached that the items covered a varied range of theoretical perspectives underpinning the construct of interest. A second group of experts, not involved in the first development phase, was then consulted for feedback on the comprehensiveness and clarity of the items.
The final RRAMS comprised two subscales. The first subscale included six items reflecting theories and social ideologies in agreement with "Anglo-centric/Assimilationist attitudes." It included items reflecting alignment with RWA (e.g., 'We need to stop spreading dangerous ideas and stick to the way things have always been done in Australia'), agreement with SDO (e.g., 'It is okay if some racial or ethnic groups have better opportunities in life than others'), endorsement of colour-blind racial ideology (e.g., 'We shouldn't talk about racial or ethnic differences'), zero-sum racist thinking (e.g., 'Racial or ethnic minority groups take away jobs from other Australians'), and endorsement of assimilationist ideology (e.g., 'People from racial or ethnic minority groups should behave more like mainstream Australians').
The second subscale comprised six items assessing agreement with "Inclusive/Pluralistic attitudes." It included low compliance with RWA (e.g., 'Some of the best people in our country are those who are challenging our government and ignoring the 'normal' way things are supposed to be done'), low SDO (e.g., 'We should do what we can to create equal conditions for different racial or ethnic groups'), acknowledgment of racism (e.g., 'People from racial or ethnic minority groups experience discrimination in Australia'), acknowledgment of white privilege (e.g., 'Australians from an Anglo background (that is, of British descent) enjoy an advantaged position in our society'), and endorsement of multiculturalism (e.g., 'People from racial or ethnic minority groups benefit Australian society'). Besides their theoretical relevance, these constructs have been found to be acceptable and appropriate for assessing population race-related attitudes in previous national studies in Australia [31,32]. Response options for each item ranged from 'strongly disagree' (0), 'disagree' (1), 'neither agree nor disagree' (2), and 'agree' (3) to 'strongly agree' (4).
Phase 2: Identification of a potential factorial structure. Since the RRAMS was conceptualized to measure agreement with both conformity to the dominant ethnoculture ("Anglocentric/Assimilationist attitudes") and agreement with promotion of ethnic diversity ("Inclusive/Pluralistic attitudes"), an Exploratory Factor Analysis (EFA) was initially run to empirically test this assumption (i.e., that a two-factor solution would underlie the set of items). The factorial solution suggested by the EFA was then confirmed by means of a Confirmatory Factor Analysis (CFA) [33] in an independent sample to avoid capitalization on chance [34,35]. We randomly divided the NDTIS sample into one group for the EFA and another group for the CFA; see Table 1 for the distribution of each subsample according to sociodemographic characteristics. Considering that a sample size with at least 200 participants is sufficient for EFA under normal conditions (medium communalities and at least three items loading on each factor) [36] and CFA has higher sample requirements, 271 participants from the original survey were randomly selected for the EFA.
Factor retention relied on Scree Plot [37] criteria and Parallel Analysis (PA) [38]. In the PA, 1,000 random and resampled datasets with the same number of RRAMS items and respondents were generated. The rationale of the PA is that meaningful factors extracted in the current study should account for more variance than factors extracted from random data [36].
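The parallel-analysis logic described above can be sketched in a few lines. This is an illustrative re-implementation, not the software the authors used; the function name and the synthetic check are ours, and random-data eigenvalues are compared against their mean (comparing against the 95th percentile is a common stricter variant).

```python
import numpy as np

def parallel_analysis(data, n_random=1000, seed=0):
    """Retain factors whose observed eigenvalue exceeds the mean
    eigenvalue obtained from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    # Eigenvalues of the observed correlation matrix, largest first
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.zeros(k)
    for _ in range(n_random):
        r = rng.standard_normal((n, k))
        rand += np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    rand /= n_random
    return int((obs > rand).sum()), obs, rand

# Synthetic check: 8 items driven by two independent latent factors
rng = np.random.default_rng(1)
f1, f2 = rng.standard_normal((500, 1)), rng.standard_normal((500, 1))
X = np.hstack([f1 + 0.5 * rng.standard_normal((500, 4)),
               f2 + 0.5 * rng.standard_normal((500, 4))])
n_factors, _, _ = parallel_analysis(X, n_random=100)
```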
Factor extraction was conducted with Maximum Likelihood [39] and oblique rotation ("direct oblimin") [40]. Items with non-salient factor loadings (<.40) were deleted. Additionally, 100 bootstrapped samples were used to generate 95% confidence intervals for the factor loadings [41].
Phase 3: Confirmation of the factorial structure in an independent sample. After a factorial structure was derived from the EFA, the instrument was assessed using CFA in an independent sample (n = 2,443). The estimation method was Weighted Least Squares [42] with a mean- and variance-adjusted (WLSMV) test statistic [43]. Missingness of individual item responses ranged from 0.9% to 2.2%, and this was handled with multiple imputation of 20 datasets using the fully conditional specification method [44]. We imputed information for individuals who responded to at least one item of the RRAMS (n = 2,714). Rubin's rules [45] were used to pool point estimates and standard errors (SE). To evaluate model fit, the scaled χ² was used to test the hypothesis of exact fit. Additionally, we used approximate fit indices: the scaled Comparative Fit Index (CFI) and the scaled Root Mean Squared Error of Approximation (RMSEA); for simplicity, the term 'scaled' is omitted from now on. Values of CFI ≥ 0.96 and RMSEA ≤ 0.05 indicate good model fit [46], while 0.05 < RMSEA ≤ 0.10 indicates acceptable fit [35].
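Pooling estimates across the 20 imputed datasets with Rubin's rules reduces to a short calculation. A minimal sketch (our own illustration, with hypothetical inputs):

```python
import numpy as np

def rubin_pool(estimates, std_errors):
    """Rubin's rules: the pooled estimate is the mean across imputations;
    total variance = within-imputation variance W + (1 + 1/m) * B,
    where B is the between-imputation variance and m the number of
    imputed datasets."""
    q = np.asarray(estimates, float)
    se = np.asarray(std_errors, float)
    m = len(q)
    w = (se ** 2).mean()        # within-imputation variance W
    b = q.var(ddof=1)           # between-imputation variance B
    return q.mean(), np.sqrt(w + (1 + 1 / m) * b)

# Hypothetical estimates/SEs from m = 3 imputed datasets
est, pooled_se = rubin_pool([1.0, 1.2, 0.8], [0.1, 0.1, 0.1])
```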
Since factorial structures derived from EFA do not necessarily imply good fitting CFA models (e.g. due to cross-loadings or residual correlations) [47], in case the factorial structure had a poor fit, model re-specifications were informed by standardized residuals, Modification Indices (MI) and the Standardized Expected Parameter Change (SEPC) [48]. Completely standardized solutions were reported throughout the paper.
Phase 4: Analysis of measurement invariance. An initial Multigroup CFA [49] was conducted to check if the same configural structure would hold for all sex-, age-, and education-based groups, i.e., to check whether configural invariance could be confirmed with the data at hand. The χ², CFI and RMSEA and their previously described cut-off points were used to evaluate configural invariance. The second level of measurement invariance, metric invariance, was assessed to ascertain whether factor loadings were similar across the same groups. The final test, scalar invariance, was used to determine whether item thresholds were equal across sex, age and education. Given that scalar models are nested within metric models, and metric models are nested within configural models, metric and scalar invariance were evaluated through a Likelihood Ratio Test (LRT), namely the Δχ² [50]. The Δχ² statistic was computed in each imputed dataset and pooled according to Li, Meng [51] recommendations (i.e. the D2 statistic). When the Δχ² was statistically significant, the ΔCFI [52] was used to evaluate the magnitude of the difference. Models with ΔCFI ≤ −.002 indicated lack of invariance [53]. Whenever measurement invariance was not achieved, tests of partial invariance were conducted [54].
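The chi-square difference test for nested invariance models can be sketched as follows. This is illustrative only, with hypothetical fit statistics: the pooled, scaled statistics used in the paper additionally require the D2 pooling and a scaling correction (e.g. Satorra-Bentler) not shown here.

```python
from scipy.stats import chi2

def chisq_diff_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Likelihood-ratio (chi-square difference) test for nested models,
    e.g. scalar (restricted) vs. metric (free). Returns the chi-square
    difference, df difference, and the upper-tail p-value."""
    d_chisq = chisq_restricted - chisq_free
    d_df = df_restricted - df_free
    return d_chisq, d_df, chi2.sf(d_chisq, d_df)

# Hypothetical metric vs. scalar comparison
d, ddf, p = chisq_diff_test(30.0, 12, 20.0, 10)
```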
Phase 5: Reliability. Internal consistency was calculated with McDonald's ω_H [55] and ordinal α [56]. McDonald's ω_H has two advantages over the traditional and widely used Cronbach's α: it does not assume (1) tau-equivalence or (2) a congeneric model without correlated errors (i.e. locally independent items) [57]. Furthermore, the ordinal α is reported because Cronbach's α underestimates reliability in ordinal Likert scales. Adequate methods for calculating confidence intervals for the ordinal α are not available [58].
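As a point of reference, Cronbach's α is computable directly from an item covariance matrix; applying the same formula to a polychoric correlation matrix yields the ordinal α reported in this paper (polychoric estimation itself is omitted in this sketch):

```python
import numpy as np

def cronbach_alpha(cov):
    """Cronbach's alpha from a k x k item covariance (or correlation)
    matrix: alpha = k/(k-1) * (1 - sum of item variances / total variance)."""
    cov = np.asarray(cov, float)
    k = cov.shape[0]
    return (k / (k - 1)) * (1 - np.trace(cov) / cov.sum())

# Two items correlating .5, using a correlation matrix as input
alpha = cronbach_alpha([[1.0, 0.5], [0.5, 1.0]])
```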
Phase 6: Item reduction analysis. In the item reduction analysis, we evaluated inter-item correlations, corrected item-total correlations (CITC) and item difficulties. Inter-item correlations indicate the extent to which all items on a scale are examining the same construct without redundancy. Thus, inter-item correlations should be moderate (i.e. items that measure the same construct but also have unique variances) and items with correlations lower than .20 were considered for deletion [59].
The next step was the evaluation of CITC. One important aspect in instrument development is achieving a good balance between a small number of items (lengthy questionnaires can induce lower response rates [60]) and adequate reliability. A recent study by Zijlmans, Tijmstra [61] showed that the CITC [62] performed better than other methods at identifying which items can be removed while maximizing reliability. Therefore, items with the lowest CITC should be the first to be considered for removal. The corrected item-total correlation needs to be calculated within subscales, since items can only be summed into a total score when they measure the same construct [63]. For this reason, CITCs were calculated after the factorial structure was established (i.e. we had no prior information about which item belonged to which subscale to calculate corrected total scores). Given the ordinal nature of the data, the inter-item correlations and CITCs were investigated with non-parametric Kendall's τ [64].
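The corrected item-total correlation with Kendall's τ described above correlates each item with the sum of the remaining items in its subscale. A minimal sketch on synthetic data (our illustration, not the authors' code):

```python
import numpy as np
from scipy.stats import kendalltau

def corrected_item_total(items):
    """Corrected item-total correlations (Kendall's tau-b): each item is
    correlated with the total of the *other* items, so the item itself
    does not inflate the correlation."""
    items = np.asarray(items, float)
    total = items.sum(axis=1)
    citcs = []
    for j in range(items.shape[1]):
        tau, _ = kendalltau(items[:, j], total - items[:, j])
        citcs.append(tau)
    return citcs

# Synthetic subscale: four items driven by one latent trait
rng = np.random.default_rng(2)
trait = rng.standard_normal((300, 1))
items = trait + 0.6 * rng.standard_normal((300, 4))
citcs = corrected_item_total(items)
```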
Finally, due to the limitations of classical difficulty indices such as the p-value (i.e. the proportion of correct responses given the total score) [65], we evaluated item difficulty with the LI_IRF, the location index based on the item-response function [66]. The LI_IRF is calculated from the item locations (β_i), which are a well-known reparameterization of the item thresholds (τ_i) between adjacent response categories i and i+1 [67]. The LI_IRF indicates the value of the latent trait at which respondents have an average score of half the maximum item score. For example, in a 5-point rating scale (items ranging from 0 = Strongly Disagree to 4 = Strongly Agree), the LI_IRF indicates the level of inclusive/pluralistic attitudes required for participants to score on average 2 (2 = Neutral). In our study, the LI_IRF was chosen over the item thresholds (τ_i) to convey item difficulty because of two advantages: the interpretation of the LI_IRF is (a) easier, since it is a single index compared to four thresholds per item; and (b) more substantive, since it is based on the latent trait ("Anglo-centric/Assimilationist attitudes" or "Inclusive/Pluralistic attitudes") rather than on the latent response variables [68]. Nonetheless, for the sake of completeness, we also reported the item thresholds (τ_i).
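Under a graded response model, the LI_IRF can be found numerically as the θ at which the expected item score (the sum of the cumulative category probabilities) reaches half the maximum score. A sketch under assumed logistic category curves; the discrimination a and thresholds b_k below are hypothetical:

```python
import numpy as np

def li_irf(a, thresholds, lo=-6.0, hi=6.0, tol=1e-8):
    """Theta value at which the expected item score equals half the
    maximum score. Expected score = sum over k of P(X >= k | theta),
    with P modeled as a logistic function of a * (theta - b_k)."""
    b = np.asarray(thresholds, float)
    target = len(b) / 2.0  # half the maximum score
    def expected(theta):
        return (1.0 / (1.0 + np.exp(-a * (theta - b)))).sum()
    # Bisection: expected() is strictly increasing in theta
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expected(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A 5-category item (max score 4) with symmetric thresholds is located at 0
loc = li_irf(1.5, [-2.0, -1.0, 1.0, 2.0])
```

Shifting all thresholds by a constant shifts the location by the same amount, which matches the interpretation of the LI_IRF as an item-difficulty index on the latent-trait metric.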
Phase 7: Construct validity. To evaluate the RRAMS' construct validity, we investigated known-groups validity according to sex, education and age. Known-groups validity compares the levels of the constructs in different groups (e.g. men compared to women) and should be applied when it is known, theoretically or due to previous empirical research, that these groups differ on the variable of interest. Therefore, known-groups validity can inform whether the instrument is able to discriminate between two groups that are known to be different regarding the construct (e.g. individuals with more education have more inclusive attitudes). Investigation of known-groups validity is important in many instances, such as when there is no "gold-standard" method of measurement to which the instrument can be compared [69]. That is, since there is no "gold-standard" or established (based on robust psychometric evidence) instrument to measure race-related attitudes and multiculturalism in Australia, it is not possible to define what would constitute a good measure for the RRAMS to display convergent validity with. Furthermore, in our case, there is previous evidence of groups that are known to differ according to multiculturalism and race-related attitudes. For example, as multiculturalism can be perceived as identity-threatening by dominant group members [11,19], we expected men to have more conservative attitudes towards multiculturalism when compared to women [22,70]. The same pattern was expected for older participants (>45 years old) when compared to younger respondents [22,70,71]. Participants with a university degree, in turn, were expected to be more supportive of multiculturalism than those with lower educational attainment. 
This hypothesis is in accordance with previous findings showing that sense of economic security (economic, personal, and cultural), higher education and younger age were associated with more positive attitudes towards multiculturalism and lesser exclusionary attitudes [22,70,71]. Therefore, sex, age and education were chosen as the exogenous variables for the evaluation of known-groups validity. To assess known-groups validity, latent mean differences were calculated by constraining the latent means in one of the groups (i.e. women and participants with higher education) to zero, so this group would function as a reference group. Considering that latent variances were constrained to one in the completely standardized solution, latent mean differences are interpreted as effect sizes analogous to Cohen's [72] d [73]. Finally, we employed the Empirical Bayes model [74] to estimate factor scores, which were plotted using Kernel density [75] to inform not only the average but also the distribution of the latent trait according to groups.
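Because the latent variances are constrained to one, the latent mean differences reported here read directly as Cohen's d. For intuition, the manifest analogue on, for example, estimated factor scores is the pooled-SD standardized mean difference (our illustration, not the latent-model computation itself):

```python
import numpy as np

def cohens_d(x, y):
    """Pooled-SD standardized mean difference between two groups."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1)
                  + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Hypothetical factor scores for two groups
d = cohens_d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])
```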
---
Results
---
Identification of a potential factorial structure
Investigation of the Scree Plot and PA indicated that two factors explained substantially more variance than factors extracted from randomly generated data (Fig 1).
It should be noted that, although the third factor accounted for more variance than the third factor extracted from the random datasets, the difference was trivial. For this reason, only two factors were retained. The next step consisted of the evaluation of factor loadings (Table 2). Results showed that Item 2 ("Some of the best people in our country are those who are challenging our government and ignoring the 'normal' way things are supposed to be done"), Item 3 ("It is okay if some racial or ethnic groups have better opportunities in life than others") and Item 6 ("We shouldn't talk about racial or ethnic differences") did not have substantial factor loadings (>.40) and were therefore excluded. Of the remaining items, Item 5 had the smallest factor loading (λ₂ = 0.440; 95% CI [0.220, 0.610]). After deletion of these three items and EFA re-analysis, the two-factor solution achieved simple structure. This time, however, Item 5 did not achieve a substantial factor loading (λ₂ = 0.390; 95% CI [0.180, 0.590]) (S1 Table); that is, the factors explained only 19% of the variance of its item responses ("communality"), while 81% of the variance was explained by other sources ("uniqueness"), such as measurement error. For this reason, Item 5 was also excluded from the analysis.
---
Confirmation of the factorial structure in an independent sample
The 2-factor model was then selected and its fit examined (χ²(19) = 341.070, p < 0.001, CFI = 0.974, RMSEA = 0.083; 90% CI [0.076, 0.091]). Since the null hypothesis of exact fit was rejected (χ²(19) = 341.070, p < 0.001), we proceeded with indices of approximate fit. The CFI indicated a good fit to the data (≥0.96), while the RMSEA was adequate (0.05 < RMSEA ≤ 0.10). Residual correlations are displayed in S2 Table. Considering the overall good fit of the model and that all items exhibited substantial factor loadings (Table 3), the two-factor model with 8 items was accepted. "Anglo-centric/Assimilationist attitudes" (e.g. "Racial or ethnic minority groups take away jobs from other Australians") was regarded as the first subscale, whereas the second comprised six items assessing agreement with "Inclusive/Pluralistic attitudes."
---
Analysis of measurement invariance
Next, measurement invariance by sex, education and age was evaluated (Table 4). Regarding sex, the LRT indicated that the metric model was not statistically different from the configural model. When scalar invariance was evaluated, the pooled Δχ² was negative for both education- and age-based groups. Although a negative Δχ² is not interpretable (and, therefore, values were set to zero), these negative values can occur when the difference between models is small [76]. For this reason, the threshold constraints were regarded as tenable [77], providing indirect support for scalar invariance.
---
Reliability
The first subscale, "Anglo-centric/Assimilationist attitudes" (ω_H = 0.83, α_ORDINAL = 0.85, α = 0.85; 95% CI [0.84, 0.86]), showed good reliability, while the "Inclusive/Pluralistic attitudes" subscale (ω_H = 0.77, α_ORDINAL = 0.79, α = 0.72; 95% CI [0.70, 0.73]) exhibited adequate reliability.
---
Item reduction analysis
Inter-item correlations ranged from 0.29 to 0.56 (S3 Table) and no correlations were lower than 0.20. The CITCs ranged from 0.39 to 0.58. Within the "Anglo-centric/Assimilationist attitudes" subscale, the easiest item was "We need to stop people spreading dangerous ideas and stick to the way things have always been done in Australia" (LI_IRF = 0.00), while the hardest item was "Racial or ethnic minority groups take away jobs from other Australians" (LI_IRF = 0.72) (Table 3). That is, with respect to Item 10, respondents needed to have 0.72 standard deviations more Anglo-centric/assimilationist attitudes than the average Australian to produce an expected score of 2 out of 4. Item 10 was the hardest item in the "Anglo-centric/Assimilationist attitudes" subscale since its endorsement required more Anglo-centric/assimilationist attitudes than the other items. Within the "Inclusive/Pluralistic attitudes" subscale, the easiest item was "We should do what we can to create equal conditions for different racial or ethnic groups" (LI_IRF = −1.58), while the hardest item was "People from racial and ethnic minority groups experience discrimination in Australia" (LI_IRF = −0.80). The hierarchy of item difficulties was identical when average item thresholds (τ̄) were inspected (S4 Table).
---
Construct validity
Examination
---
Discussion
The current study aimed to present the RRAMS as a measure of attitudes towards multiculturalism in Australia and to examine some of its psychometric properties using data from a nationwide sample. Results showed that the two subscales of "Anglo-centric/Assimilationist attitudes" and "Inclusive/Pluralistic attitudes" are initially valid and reliable for the Australian population. In the initial stage of psychometric assessment, we identified poorly performing items, and these were excluded. One of these was Item 2 ("Some of the best people in our country are those who are challenging our government and ignoring the 'normal' way things are supposed to be done"), an item originally designed to reflect RWA in relation to multiculturalism. Despite its original purpose, Item 2 might not reflect the cultural and race-related topic in question. This is one possible explanation for why the responses to this item were not strongly influenced by respondents' Inclusive/Pluralistic attitudes towards multiculturalism (only 12% of the variance was explained by the supposedly corresponding factor). For instance, the wording "challenging our government" can be interpreted as referring to a general debate not necessarily reflecting ethnic-racial differences in political representation and resource distribution. Future studies might test the item fit by emphasizing 'challenging our government' as pressuring for a political agenda that prioritizes reducing social inequalities among ethnic-racial groups and promotion of a pluralistic society. Items 3 ("It is okay if some racial or ethnic groups have better opportunities in life than others") and 6 ("We shouldn't talk about racial or ethnic differences") also performed poorly and failed to capture assimilationist views. Item 3 was designed to reflect respondents' SDO. It was hypothesized that participants with high SDO, and thus assimilationist views of multiculturalism, would endorse the item.
Contrary to expectations, these respondents might have interpreted the phrasing 'some racial or ethnic groups' as a reference to ethnic-racial minorities. Conservatives might perceive affirmative action and social assistance policies as privileges and can endorse the notion that minorities 'have it easy.' Conservative attitudes such as RWA and SDO have been linked to social and economic conservatism, reflecting ideologies of competition and meritocracy [78].
The ambiguity left by the item wording can thus explain its failure in discriminating assimilationist attitudes. Item 6, in turn, might not have worked in its subscale because, again contrary to our hypothesis, respondents with high assimilationist views might be willing to discuss racial and ethnic differences with the intent of promoting assimilationist and racist views [79]. Therefore, the item performed poorly as respondents in different strata of assimilationist attitudes could be prone to endorse the item for different reasons. The last deleted item was Item 5 ("Australians from an Anglo background [that is, of British descent] enjoy an advantaged position in our society"). One possible explanation for the item's poor performance is that the recognition of privilege does not necessarily inform on inclusive/pluralistic attitudes. For example, a previous study in the Australian states of Queensland and New South Wales showed these as two independent dimensions [9]. The poor loading on the inclusive attitudes subscale suggests that respondents might not link acknowledgment of white privilege to notions of a pluralistic society. Taken together, these results potentially indicate that debates over multiculturalism in Australia need to promote awareness of the connection between Anglo-privilege and racism. Scholars advocate that challenging racism and privilege is a necessary step towards promoting the abandonment of assimilationist views in favour of more inclusive perspectives [9,13].
The subscales "Anglo-centric/Assimilationist attitudes" and "Inclusive/Pluralistic attitudes" achieved metric invariance and scalar invariance according to sex. Furthermore, the two subscales achieved metric invariance according to education, and the results also (indirectly) supported scalar invariance. That is, "Anglo-centric/Assimilationist attitudes" and "Inclusive/Pluralistic attitudes" influenced the item responses the same way in each group (metric invariance) and the items were not more difficult for one group compared to another (scalar invariance). The RRAMS items can thus be used to compare men/women, participants with/without tertiary education and young/older participants, and the scores will reflect true differences regarding "Anglo-centric/Assimilationist attitudes" and "Inclusive/Pluralistic attitudes" rather than measurement bias [35].
After ensuring measurement invariance between subgroups, we compared the factor scores between men and women, participants with and without tertiary education, and participants up to and over 45 years of age. The strongest predictor of assimilationist and inclusive attitudes was education, while sex also influenced both constructs. Furthermore, older individuals were more likely to have higher assimilationist attitudes. The role of education in promoting inclusive/pluralistic attitudes has been previously established [22,70] and suggests education as an important target for future interventions aimed at promoting multiculturalism in Australia. The results also indicated that men and older individuals had stronger assimilationist attitudes in comparison with women and younger counterparts [71]. In general, the associations of the two subscales with sex, education, and age conformed to theoretical expectations and provide further evidence of the RRAMS' construct validity.
With regards to reliability, the "Anglo-centric/Assimilationist attitudes" and "Inclusive/Pluralistic attitudes" subscales showed adequate reliability (>.70) [80], since values between .70 and .80 are considered appropriate for research purposes [81]. In case the RRAMS is used in the future in high-stakes scenarios (i.e. where decisions need to be made based on scale scores) [82], new items should be developed to increase reliability.
In the item reduction analysis, all items displayed moderate inter-item correlations and CITCs, so no items needed to be removed. The item with the smallest CITC was Item 7 ("People from racial or ethnic minority groups benefit Australian society"), followed by Item 4 ("We should do what we can to create equal conditions for different racial or ethnic groups."). Since reliability was only modest, we considered that further shortening the scale would be more detrimental in terms of reliability and content validity than beneficial as a means of creating a briefer measure. In addition, with the exception of Item 1 ("We need to stop people spreading dangerous ideas and stick to the way things have always been done in Australia.") and Item 12 ("People from racial and ethnic minority groups should behave more like mainstream Australians."), item difficulties were spread across the latent trait. Once again, although Item 1 or Item 12 could potentially be removed due to their similar difficulties, we believe removing additional items would be detrimental to content validity and the psychometric properties of the scale.
One limitation of the current study was that we were not able to evaluate convergent and discriminant validity. The RRAMS was originally applied in the 2013 NDTIS, a study that focused on collecting information on the use of dental services in Australia and did not include other psychosocial measures. For this reason, we considered known-groups validity to be the best strategy to investigate the RRAMS' construct validity. While the results from known-groups validity were in accordance with theoretical expectations (e.g. inclusive attitudes were more present in individuals with more education), future studies also need to investigate other forms of validity, such as convergent/discriminant and predictive validity. For example, future studies should evaluate whether the scores from the "Inclusive/Pluralistic attitudes" subscale are positively correlated (i.e. convergent validity) with scores from other instruments evaluating multiculturalist and inclusive attitudes. Our analyses did not account for sampling weights, meaning that our sample is not representative of the Australian population. It is important to highlight, however, that our study included Australians from all age groups and socioeconomic backgrounds across all states and territories of the country. Furthermore, to the best of our knowledge, this is the largest sample in which a measure of attitudes towards multiculturalism has been employed in Australia. Lack of representativeness and its implications for the validity of scientific findings are central to longstanding discussions in the literature [83]. Because the purpose of the current analysis was to assess the psychometric properties of the RRAMS, as opposed to purely describing prevalence estimates, we do not believe that the lack of representativeness of our sample limits the validity of inferences made here.
The fact that a study sample is representative of some larger population does not mean that the associations between variables in the sample will apply to every subgroup of the population. The overall association is simply an average value that has been balanced according to the distribution of people in these subgroups. If a sample is representative of the sex distribution in the target population, the results will not necessarily apply to both males and females, but only to a hypothetical participant that is "weighted" on sex. Subgroup analyses are necessary if one wishes to investigate relationships between variables by subgroups, which we have performed during the criterion validity assessment stage.
In conclusion, we successfully developed a comprehensive race-related attitudes and multiculturalism scale for the Australian context. We used robust, cutting-edge psychometric techniques and data from a large, nationwide survey. The small number of items (eight) means the instrument can readily be used by policy makers and in ensuing research. Future studies should assess the scaling properties of the instrument using parametric and nonparametric Item Response Theory techniques. The instrument may, nevertheless, be useful to inform on multiculturalism attitudes across the country and hopefully contribute to a public debate aimed at promoting multiculturalist, inclusive attitudes with the potential to increase social cohesion in Australia.
---
The authors do not have permission from the ethics committee to publicly release the datasets of the 2013 NDTIS in either identifiable or de-identified form. However, data are available to bona fide researchers provided that all privacy
Emerging research has shown that those of sexual-minority (SM) status (i.e., those exhibiting same-sex sexuality) report lower levels of psychological well-being. This study aimed to assess whether this relation is largely in place by the onset of adolescence, as it is for other social statuses, or whether it continues to emerge over the adolescent years, a period when SM youth face numerous challenges. Moreover, the moderating influences of sexual orientation (identification), early (versus later) reports of same-sex attractions, and gender were also examined. Using data from Add Health, multiple-group latent growth curve analyses were conducted to examine growth patterns in depressive affect and self-esteem. Results suggested that psychological well-being disparities between SM and non-SM were generally in place by early adolescence. For many, the remainder of adolescence was a recovery period when disparities narrowed over time. Early and stable reporting of same-sex attractions was associated with a greater initial deficit in psychological well-being, especially among males, but it was also associated with more rapid recovery. Independent of the timing and stability of reported same-sex attractions over time, actual sexual orientation largely failed to moderate the relation between SM status and psychological well-being. Importantly, the sizable yet understudied subgroup that identified as heterosexual but reported same-sex attractions appeared to be at substantial risk.
---
Why Are Middle Childhood and Early Adolescence So Important?
While there may be many reasons why middle childhood is an important developmental period with respect to the relation between social status and psychological well-being, two likely reasons for its importance are (1) advances in cognitive development during this period that render one's social status(es) more personally relevant to one's sense of self and (2) increases in the size and instability of the peer network.
---
Advances in cognitive development
Research has shown that the relationship between social status and psychological well-being is an indirect one and is, in part, mediated by the messages (both positive and negative) one receives regarding one's social status(es) (Fordham & Ogbu, 1986; Mays & Cochran, 2001; McLeod & Owens, 2004; Van Laar, 2000). Importantly, the extent to which those messages are internalized is directly related to their influence on psychological well-being (Herek & Garnets, 2007; Steele, 1997; Williams & Williams-Morris, 2000). What is often overlooked is that the following three cognitive abilities must be acquired before the messages regarding one's social statuses can be internalized: (1) an awareness of the categories one belongs to; (2) the ability to perceive messages from others and society regarding the categories one belongs to; and (3) the ability to internalize membership in those categories as personally meaningful. Harter's (1996; 2006) extensive research on the self suggests that it is not until middle childhood that youth have acquired the last of these three abilities. During middle childhood and early adolescence (8-12 years of age), children's cognitive ability to use more objective criteria and inter-individual comparisons for self-evaluation increases (Harter, 1996; Stipek & MacIver, 1989). The self also becomes more objective and outward focused. This newfound cognitive capacity enables youth to more fully link their attributes, including the social categories to which they belong, to how they actually feel about themselves (Davis-Kean, Jager, & Collins, 2009).
---
Increases in the size and instability of the peer network
The size (Cairns, Xie, & Leung, 1998) and instability (Cairns & Cairns, 1994; Nash, 1973) of peer networks peak during middle childhood. Cairns, Xie, & Leung (1998) contend that the increases in the size and instability of peer networks may be, at least in part, driven by changes associated with the transition to middle school, which may result in greater opportunities for interaction with a wider range of peers. As a consequence, just when youth are beginning to base their sense of personal value on inter-individual comparisons and using peers as a "social mirror" (Sullivan, 1953), they are also interacting with more peers and are more likely to be interacting with those peers for the first time. The combination of changes in cognition as well as changes in social context may underlie the emergence of disparities in psychological well-being across the levels of social status during middle childhood and early adolescence.
---
Sexual-Minority Status
In terms of the emergence of disparities in psychological well-being, it is not clear whether middle childhood is an important time period for SM as it is for race, gender, overweight status, and SES. An important question regarding SM status is as follows: When does one's awareness of his or her SM status emerge? Is it early on in development like one's awareness of race and sex, or is it later on in development like one's awareness of being a college student or a parent?
Retrospective reports indicate that SM individuals recall being treated differently by others, often as early as age 8, before they develop or are even aware of their attractions to the same sex (Bell, Weinberg, & Hammersmith, 1981; Zucker, Wild, Bradley, & Lowry, 1993). They recall feeling different from their peers, and often this sense of feeling different has a negative valence and is centered around atypical, gender-related traits (Savin-Williams, 2005; Troiden, 1989). Retrospective reports also indicate that around the age of 10 or 11, many SM individuals recall their first awareness of attraction to the same sex (D'Augelli & Hershberger, 1993; Floyd & Stein, 2002; Friedman, Marshal, Stall, Cheong, & Wright, 2008; Rosario, Meyer-Bahlburg, Hunter, & Exner, 1996; Savin-Williams & Diamond, 2000). Thus, there is some evidence to suggest that awareness of one's SM status may emerge during the middle childhood years. As such, just as was the case for race, sex, overweight status and SES, the years between middle childhood and early adolescence may prove quite formative with respect to the relation between SM status and psychological well-being.
Though one may acquire a vague sense of SM status during middle childhood (i.e., a sense of difference or initial awareness of feelings of same-sex attraction), coming to grips with one's own sexuality does not end there. A subset of youth go on to realize during early adolescence that this attraction to the same sex is what society deems homosexual, and then an even smaller subset go on to actually identify themselves (as opposed to just their sexual attractions) as homosexual or bisexual (D'Augelli & Hershberger, 1993; Rosario et al., 1996; Savin-Williams & Diamond, 2000). Awareness of sexual-minority status is a prerequisite for others' messages regarding sexual minorities to be internalized as personally meaningful, and the period when one's awareness of his or her SM status forms appears to extend into late adolescence or even early adulthood. Thus the relation between SM status and psychological well-being may itself be in flux through late adolescence/early adulthood.
Complicating things further is the possibility that growing awareness of one's SM status during adolescence will be accompanied by social isolation as well as victimization and stigmatization. In both the school and the home, many SM adolescents report feeling invisible (Garofalo et al., 1998;Hershberger & D'Augelli, 1995) and have a difficult time finding other SM adults to confide in or other SM peers with whom to socialize (Herek & Garnets, 2007;Lewis, Derlega, Berndt, Morris, & Rose, 2001;Mills, Paul, Stall, Pollack, & Canchola;Morris, Waldo, & Rothblum, 2001). Beyond feeling a level of invisibility, SM adolescents also face a disproportionate amount of peer harassment, bullying, and aggression from their non-minority peers (Herek & Sims, 2007;Mays & Cochran, 2001;Russell & Joyner, 2001). Thus, at a time when SM adolescents are coming to grips with their status, they are typically doing so alone, perhaps in the face of heightened harassment and aggression. As a consequence, the influence of SM status on psychological well-being may prove stronger between mid- to late-adolescence than between middle childhood and early adolescence.
---
Moderators of Sexual-Minority Status and Psychological Well-Being
Available cross-sectional research has identified three factors that moderate the relation between SM status and psychological well-being: (1) sexual identification or orientation, (2) age of first awareness/disclosure, and (3) gender status. Importantly, to date the extent to which, if at all, these factors moderate the relation between SM status and growth in psychological well-being is unknown.
---
Sexual identification
While all those who exhibit same-sex sexuality share the status of SM, they vary dramatically as to whether or not they hold a SM identity, as well as the nature of that identity, if any. Among those exhibiting same-sex sexuality, some identify as heterosexual, some as homosexual, and others as bisexual (Diamond, 2006). This heterogeneity in identification among those who exhibit same-sex sexuality could have implications for the relation between SM status and psychological well-being. For example, researchers who conceptualize sexual-identity formation as a progression through a set of stages have found that, among SM individuals, those in the later stages report higher psychological well-being than do those in the earlier stages (Brady & Busse, 1994;Halpin & Allen, 2004;Levine, 1997). Researchers have also found that, among those who identify as bisexual or homosexual, acceptance of one's sexual identity is positively related to mental health (Hershberger & D'Augelli, 1995;Miranda & Storms, 1989;Rosario, Hunter, Maguen, Gwadz, & Smith, 2001). Finally, there is some evidence to suggest that, relative to those who identify as homosexual, those who identify as bisexual may be at higher risk for deficits in psychological well-being (Balsam, Beauchaine, Mickey, & Rothblum, 2005;Jorm, Korten, Rodgers, Jacomb, & Christensen, 2002).
---
Age of first awareness/disclosure
The research above indicates that coming to terms with one's sexual orientation and integrating it within one's sense of self is associated with higher psychological well-being. However, the extent to which this is the case may vary with age. There are risks associated with disclosing one's sexual orientation to others, such as increased victimization, the disruption of close personal relationships, and heightened disapproval from others (Corrigan & Mathews, 2003;McDonald, 2008). For some, these risks can outweigh the benefits of coming to terms with one's sexual orientation (Corrigan & Mathews, 2003;Friedman et al., 2008). Emerging research suggests that one factor related to whether or not the risks outweigh the benefits is age of first awareness or disclosure. For example, relative to those who progress through these milestones at a later age, those who are aware of their same-sex attractions or disclose their sexual orientation at younger ages report experiencing more gay-related discrimination, bullying, and disrupted relationships during adolescence, and they generally have fewer resources, both interpersonal and intrapersonal, to cope with these threats (D'Augelli & Hershberger, 1993;Friedman et al., 2008;Remafedi, 1991;Savin-Williams, 1995). The increase in threats coupled with the decrease in sources of support is thought to translate into lower psychological well-being among those who progress through these milestones at an earlier age (Friedman et al., 2008;McDonald, 2008). In fact, Friedman et al. (2008) found that, relative to those who were first aware of same-sex attractions at an older age (adolescence), those who were first aware at an earlier age (middle childhood) reported lower psychological well-being and physical health during adulthood.
---
Gender
The relation between gender and psychological well-being appears to be muted among the sexual-minority population. That is, relative to the general population, where females tend to report lower levels of psychological well-being than males (Twenge & Nolen-Hoeksema, 2002), the gender differences within the SM population are diminished or absent (Balsam et al., 2005;Cochran et al., 2003, Elze, 2002;Fergusson et al., 2005).
---
Hypotheses and Key Questions
Though this study was partly exploratory in nature, the following hypotheses guided our examination. 1a) By early adolescence, we expected SM youth to report lower levels of psychological well-being than those of sexual-majority status; 1b) Disparities in psychological well-being among SM and sexual-majority individuals were predicted to increase during adolescence. By comparing the size of disparities at early adolescence (i.e., hypothesis 1a) to the extent, if any, that those disparities increase over adolescence (i.e., hypothesis 1b), we evaluated the relative influence of middle childhood and adolescence on the relation between SM status and psychological well-being. 2) Among those of SM status, we expected those of bisexual status to report lower psychological well-being at the onset of adolescence as well as lower growth in well-being across adolescence. 3a) In terms of initial status differences and growth differences, we expected earlier awareness of same-sex attractions to be associated with lower psychological well-being; and 3b) we expected that the disparities in psychological well-being between SM and non-SM would be larger among those SM reporting earlier awareness of same-sex attractions. 4) In terms of both intercept differences and growth differences, we expected psychological well-being disparities between SM and non-SM to be more pronounced among males.
---
Methods
---
Sample
The data for this study came from the National Longitudinal Study of Adolescent Health (Add Health; Bearman et al., 1997), a multi-wave, nationally representative sample of American adolescents. Using a clustered sampling design, 80 high schools were recruited for participation. The sample of schools was stratified by region, urbanicity, school type, ethnic mix, and size. At the point of initial assessment (Wave 1), the total sample was 20,745 7th-12th graders. Two additional waves of data are available, taking place approximately one year (Wave 2) and six years (Wave 3) after the initial assessment. The sample sizes and retention rates for Waves 2 and 3 were 14,988 (72%) and 15,170 (73%), respectively. For the present study, only those respondents who completed a sexual orientation measure at Wave 3, completed same-sex attraction measures at Waves 1, 2, and 3, had data for age, and were assigned a sample weight were included in the study (N = 7,733).
With respect to psychological well-being, respondents included in the study (n = 7,733) reported slightly lower levels of depression at Wave 1, t(20,703) = 2.44, p < .05, and Wave 3, t(15,233) = 3.96, p < .001, than those not included in the study (n = 12,970). Those included in the study also reported slightly higher levels of self-esteem at Wave 1, t(20,681) = 2.99, p < .01, and Wave 2, t(14,726) = 3.10, p < .01. In every case where differences in psychological well-being were found, effect sizes were small. (No R² was larger than .005.) Finally, males (χ²(1) = 75.28, p < .001) and those in the older cohort (χ²(1) = 18.08, p < .001) were underrepresented among those included in the study.
Among those included in the study (n = 7,733), the amount of missing data on the psychological well-being indices was low (less than 0.1% at each Wave). In order to maximize the data and include all possible cases, we used Full Information Maximum Likelihood (FIML) estimation, a missing data algorithm available within Mplus (Muthen & Muthen, 1998-2009).
---
Procedure
The first wave of data was collected during 1994 and 1995 via in-home questionnaires. The questionnaires covered a range of topics: health status, nutrition, peer networks, family composition and dynamics, romantic partnerships, sexual partnerships, and risk behavior. Approximately a year later, respondents completed a second in-home questionnaire that was similar in content. Approximately six to seven years after initial assessment, respondents completed a third in-home questionnaire, one that was similar in content to the first but also covered such topics as romantic relationships, child-bearing, and educational histories.
---
Measures
Psychological well-being-We focused on two indices of psychological well-being: depressive affect and self-esteem. Depressive affect was based on a 9-item, truncated version of the CES-D (Radloff, 1977). An example item is: "During the past week, have you been bothered by things that usually do not bother you?" The possible range was from 0 to 3, with higher responses indicating higher levels of depressive affect. Cronbach alphas were .79, .79, and .80 for waves 1, 2, and 3 respectively. Self-esteem was based on a 4-item scale used previously by Regnerus & Elder (2003). An example item is: "You like yourself just the way you are." The possible range was from 1 to 5, with higher responses indicating higher levels of self-esteem. Cronbach alphas were .83, .81, and .79 for waves 1, 2, and 3 respectively.
Sexual orientation and sexual-minority status-Based on the distinction between SM status (those exhibiting versus those not exhibiting same-sex sexuality) and sexual orientation (those identifying versus those not identifying as a SM), we classified individuals into one of four groups. Classification was based on a single question that was asked at Wave 3 only: "Please choose the description that best fits how you think about yourself." The possible responses were: (a) 100% heterosexual (straight); (b) mostly heterosexual (straight), but attracted to people of your own sex; (c) bisexual - that is, attracted to men and women equally; (d) mostly homosexual (gay), but somewhat attracted to people of the opposite sex; (e) 100% homosexual (gay); and (f) not sexually attracted to either males or females. All those who indicated no sexual attraction (response f) were dropped from analyses (n = 74), as were those who refused to answer the question (n = 73). All who identified themselves as 100% heterosexual (response a) were classified as Heterosexual-identified/non-SM (n = 6,889). All who indicated some level of same-sex sexuality (responses b, c, d, or e) qualified as a SM (n = 844). Of these individuals, those who identified as gay (responses d and e) were classified as Homosexual-identified/SM (n = 129), those who identified as bisexual (response c) were classified as Bisexual-identified/SM (n = 140), and those who identified as straight but indicated an attraction to the same sex (response b) were classified as Heterosexual-identified/SM (n = 575).
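The classification scheme above reduces to a simple mapping from the Wave 3 response codes to the four analysis groups. A minimal sketch (the function name is ours; the codes and group labels follow the description above):

```python
def classify_orientation(response):
    """Map a Wave 3 sexual-orientation response code (a-f) to one of the four
    analysis groups described in the text. Response 'f' (no sexual attraction)
    and refusals map to None and are dropped from analyses."""
    groups = {
        "a": "Heterosexual-identified/non-SM",  # 100% heterosexual
        "b": "Heterosexual-identified/SM",      # mostly heterosexual, same-sex attracted
        "c": "Bisexual-identified/SM",          # bisexual
        "d": "Homosexual-identified/SM",        # mostly homosexual
        "e": "Homosexual-identified/SM",        # 100% homosexual
    }
    return groups.get(response)  # 'f', refusals, or missing -> None (dropped)
```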
Instability of reported same-sex attractions-At Wave 1 respondents were asked two yes/no questions: (1) "Have you ever had a romantic attraction to a female?" and (2) "Have you ever had a romantic attraction to a male?" At Waves 2 and 3 respondents were asked the same questions but were asked to indicate whether they had experienced these attractions since the last time they were interviewed. Using the reported same-sex attraction (or lack thereof) associated with one's Wave 3 sexual orientation as the reference point, we created three dummy variables to assess instability in same-sex attraction - one for each wave. Among those who indicated a sexual orientation at Wave 3 that included same-sex attractions (i.e., Heterosexual-identified/SM; Bisexual-identified/SM; and Homosexual-identified/SM), a report at any given wave (i.e., Waves 1, 2, or 3) of no same-sex attractions was coded as 1, and a report of same-sex attractions was coded as 0. The opposite pattern was true for Heterosexual-identified/non-SM (the only group that reported a sexual orientation at Wave 3 that did not include same-sex attractions). For this group a report of same-sex attractions at any given wave was coded as 1, while a report of no same-sex attractions was coded as 0.
In concrete terms, relative to the reported same-sex attraction (or lack thereof) associated with one's Wave 3 sexual orientation, these dummy variables were an indication of inconsistency in reported same-sex attraction, with 1 indicating inconsistency and 0 indicating consistency. The Wave 3 instability dummy variable likely reflected confusion or measurement error, either in the Wave 3 sexual orientation measure or the Wave 3 questions pertaining to attraction to each sex. In contrast, the Wave 1 and 2 instability dummies may have reflected developmental changes or instability in awareness of and/or willingness to report same-sex attractions. For example, among those reporting a sexual orientation at Wave 3 that includes same-sex attractions, those who also reported same-sex attractions at Waves 1 and/or 2 may have become aware of their same-sex attractions at an earlier age than those who did not report same-sex attractions at Waves 1 and 2.
Consistent with previous research (Russell, 2006), preliminary analyses revealed that the independent influence of instability in reported same-sex attractions at Waves 1 and 2 on psychological well-being was modest and non-systematic. However, additional preliminary analyses indicated that (1) instability in same-sex attractions at both Waves 1 and 2 was strongly predictive of psychological well-being, (2) the influence of instability at Waves 1 or 2 (but not both) was modest, and (3) the influence of instability at Wave 3 was often nonsignificant. Based on these preliminary findings, we chose the three following dummy variables: (1) instability at Waves 1 and 2 versus all others; (2) instability at Waves 1 or 2 (but not both) versus all others; and (3) instability at Wave 3 versus all others. When each of these dummy variables were included as controls, the reference group became those who reported same-sex attractions (or lack thereof) over time that were consistent with their Wave 3 sexual orientation and the same-sex attractions (or lack thereof) that they reported along with that sexual orientation.
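The dummy coding described above can be sketched as follows (an illustrative sketch, not the authors' code; variable and key names are ours):

```python
def instability_flags(is_sm, same_sex_w1, same_sex_w2, same_sex_w3):
    """Construct the three instability dummy variables described in the text.

    is_sm: True if the Wave 3 sexual orientation includes same-sex attractions.
    same_sex_wX: True if same-sex attractions were reported at Wave X.
    A wave is inconsistent (1) when its reported attraction disagrees with the
    attraction (or lack thereof) implied by the Wave 3 sexual orientation."""
    expected = is_sm  # SM groups are expected to report same-sex attractions
    incon = [int(report != expected)
             for report in (same_sex_w1, same_sex_w2, same_sex_w3)]
    return {
        "unstable_w1_and_w2": int(incon[0] == 1 and incon[1] == 1),
        "unstable_w1_or_w2": int(incon[0] + incon[1] == 1),  # one wave, not both
        "unstable_w3": incon[2],
    }
```

With all three dummies entered as controls, the reference group is those whose reported attractions were consistent with their Wave 3 orientation at every wave (all flags 0).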
Cohort-Although age at Wave 1 ranged between 12 and 20 years, over 95% of the sample ranged between 13 and 18 (M = 15.60, SD = 1.73). We dichotomized the sample by age so that we could more closely examine how the relation between SM status and psychological well-being varied across adolescence: those between the ages of 12 and 15 (51% of the sample) were classified as young, whereas those between the ages of 16 and 20 (49% of the sample) were classified as old.
Gender status was based on self-report. Respondents indicated whether they were male (0) or female (1).
---
Results
---
Basic Descriptive Statistics
The means, standard deviations, sample sizes, and relative percentages for each of the four sexual orientation groups (i.e., Heterosexual-identified/non-SM; Heterosexual-identified/SM; Bisexual-identified/SM; and Homosexual-identified/SM) are listed in Table 1. The percentages and frequencies for unstable and stable reports of same-sex attractions are listed in Table 2. Patterns of same-sex attraction across Waves 1 and 2 are listed in the first three columns. In the sample as a whole, 83.1% reported same-sex attractions at both Waves 1 and 2 that were consistent with the same-sex attractions (or lack thereof) associated with the sexual orientation that they reported at Wave 3. The remaining 16.9% of respondents reported Wave 1 and Wave 2 same-sex attractions that were inconsistent with the sexual orientation they reported at Wave 3: 8.7% were inconsistent at both waves and 8.2% were inconsistent at only a single wave. Generally, instability in these factors was higher among those of SM status. Wave 3 patterns of same-sex attractions are listed in the last two columns of Table 2. In the sample as a whole, 94.3% reported same-sex attractions at Wave 3 that were consistent with the sexual orientation that they reported at Wave 3. The remaining respondents (5.7%) reported same-sex attractions that were inconsistent. Generally, instability in these factors was higher among the Heterosexual-identified/SM group. The last column of Table 2 lists those who reported same-sex attractions across all three waves that were consistent with the sexual orientation that they reported at Wave 3.
---
Sexual Orientation at Wave 3 and Adolescent Trajectories of Psychological Well-Being
In order to examine adolescent trajectories of psychological well-being, we used the growth curve model presented in Figure 1. The factor loadings for the linear slope were set at 0, 1, and 6.5 because the average time between Waves 1 and 2 was 1 year, and the average time between Waves 1 and 3 was 6.5 years. The intercept factor measured initial (Wave 1) levels of psychological well-being, whereas the slope factor measured linear change in psychological well-being across Waves 1, 2, and 3. We used multiple-group analyses (Duncan, Duncan, Strycker, Li, & Alpert, 1999) to examine model differences across the four sexual orientation subgroups. All analyses were conducted within Mplus, Version 5.2 (Muthen & Muthen, 1998-2009). In order to account for Add Health's sampling design, we included a stratification variable and used a maximum likelihood estimator that is robust to non-normality in the estimation of standard errors, as suggested by the administrators of Add Health for analyses conducted in Mplus (Chantala, 2003). All multi-group comparisons were based on χ² difference tests. When we conducted multi-group comparisons, only the model parameter of focus was constrained to be equal across the groups. Unless otherwise specified, all other model parameters (e.g., means, variances, and covariances) were free to vary across groups. Because ordinary χ² difference tests cannot be computed when using a robust maximum likelihood estimator (Muthen & Muthen, 1998-2009), differences in model fit were tested via the equations provided by Satorra and Bentler (1999). Due to space constraints, fit indices are not presented for each growth model, though in every case the fit was excellent (i.e., CFI > .95 and RMSEA < .05; McDonald & Ringo Ho, 2002).
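The Satorra-Bentler scaled χ² difference test referenced above can be computed by hand from each model's robust χ² value, scaling correction factor, and degrees of freedom. A minimal sketch following the published formula (function name and argument order are ours):

```python
def scaled_chisq_diff(t0, c0, d0, t1, c1, d1):
    """Satorra-Bentler (1999) scaled chi-square difference test for nested
    models fit with a robust ML estimator. Model 0 is the nested (more
    constrained) model, model 1 the comparison model. t = robust chi-square,
    c = scaling correction factor, d = degrees of freedom. Returns the scaled
    difference statistic and its degrees of freedom."""
    dd = d0 - d1                       # difference in degrees of freedom
    cd = (d0 * c0 - d1 * c1) / dd      # scaling factor for the difference
    trd = (t0 * c0 - t1 * c1) / cd     # scaled difference statistic
    return trd, dd
```

The statistic is then referred to a χ² distribution with `dd` degrees of freedom.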
Depressive affect-Pertinent results are listed in the first two columns of Table 3. In the sample as a whole, intercept levels of depressive affect were low (i.e., .638 on a scale of 0 to 3), and growth in depressive affect was negative (-.022). Intercept levels of depressive affect were equivalent across the three SM groups, Δχ²(2) = .342, p = .84. However, collectively the three SM groups reported higher intercept levels of depressive affect (.778) than Heterosexual-identified/non-SM (.619), Δχ²(1) = 277.17, p < .001. Among the three SM groups, growth in depressive affect was more negative among the Bisexual-identified/SM (-.021) and Homosexual-identified/SM (-.029) groups than among the Heterosexual-identified/SM group (-.010), Δχ²(1) = 4.27, p < .05. Also, only the Heterosexual-identified/SM group differed from the Heterosexual-identified/non-SM group (-.023), Δχ²(1) = 5.052, p < .05. In sum, at intercept the three SM groups did not differ from one another, but they collectively reported higher levels than Heterosexual-identified/non-SM. For Heterosexual-identified/SM these initial differences increased over time, but for Homosexual-identified/SM and Bisexual-identified/SM these differences remained stable over time.
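With slope loadings of 0, 1, and 6.5, the model-implied mean at each wave is simply the intercept plus the loading times the slope. An illustrative check using the full-sample estimates above (intercept = .638, slope = -.022; the function name is ours):

```python
def implied_means(intercept, slope, loadings=(0.0, 1.0, 6.5)):
    """Model-implied means for a linear growth model: intercept + loading * slope,
    with loadings of 0, 1, and 6.5 corresponding to Waves 1, 2, and 3."""
    return [intercept + t * slope for t in loadings]
```

This yields implied means of roughly .638, .616, and .495 at Waves 1, 2, and 3, consistent with the modest overall decline in depressive affect reported above.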
Self-esteem-In the sample as a whole, intercept levels of self-esteem were high (i.e., 4.085 on a scale of 1 to 5), and growth in self-esteem was positive but moderate (.019). Intercept levels of self-esteem were equivalent across the three SM groups, Δχ²(2) = .790, p = .67. However, collectively the three SM groups reported lower intercept levels of self-esteem (3.910) than did Heterosexual-identified/non-SM (4.108), Δχ²(1) = 94.05, p < .001. With respect to growth in self-esteem, none of the four sexuality groups differed from one another.
The influence of instability in reported same-sex attractions-The above analyses suggested that reported sexual orientation during early adulthood (i.e., Wave 3) was associated with psychological well-being during adolescence. Next we examined (1) whether instability in reported same-sex attractions was related to adolescent patterns of psychological well-being and (2) whether that instability influenced the relation between declared sexual orientation at Wave 3 and psychological well-being during adolescence. We did so by repeating the analyses above but including the following instability dummy variables as exogenous predictors of each growth factor: (1) unstable at Waves 1 and 2 (column 1 of Table 2), (2) unstable at Wave 1 or 2, but not both (column 2 of Table 2), and (3) unstable at Wave 3 (column 4 of Table 2). By including these dummy variables in the growth model, the reference group among the SM groups became those who reported stable same-sex attractions across all three waves, and the reference group among the Heterosexual-identified/non-SM group became those who consistently reported no same-sex attractions (column 6 of Table 2).
The influence of the three instability dummy variables on each psychological well-being growth factor is presented in Table 4. Based on multi-group analyses, the relation between the instability dummy variables and depressive affect did not differ across the three SM groups. However, the relation did differ between the SM groups and Heterosexual-identified/non-SM. The same was true for self-esteem. Consequently, in Table 4 the results are listed for Heterosexual-identified/non-SM and for the three SM groups combined, but they are not listed separately for each of the three SM groups. Focusing first on SM, in reference to those who persistently reported same-sex attractions at all three waves, those who reported no same-sex attractions at Waves 1 and 2 reported higher psychological well-being at intercept (i.e., lower depressive affect and higher self-esteem). However, they reported smaller increases in psychological well-being over time. Among Heterosexual-identified/non-SM the relation between instability in reported same-sex attractions and psychological well-being was much more muted, with those reporting same-sex attractions at both Waves 1 and 2 reporting lower depressive affect at intercept.

Controlling for instability in reported same-sex attractions did alter the relation between reported sexual orientation at Wave 3 and adolescent psychological well-being. Pertinent results are in the third and fourth columns of Table 3. Concerning depressive affect, intercept levels among the Heterosexual-identified/SM group and the Bisexual-identified/SM group were equivalent, Δχ²(1) = 2.65, p = .11. Collectively, however, they were higher than levels of depressive affect among both the Homosexual-identified/SM group, Δχ²(1) = 4.06, p < .05, and the Heterosexual-identified/non-SM group, Δχ²(1) = 358.96, p < .001. In addition, the Homosexual-identified/SM group reported higher intercept levels than the Heterosexual-identified/non-SM group, Δχ²(1) = 163.41.
Taken together, at intercept the Heterosexual-identified/non-SM group reported the lowest depressive affect, followed by the Homosexual-identified/SM group, followed by the Heterosexual-identified/SM and Bisexual-identified/SM groups, who reported equivalent levels to one another as well as the highest levels overall. Growth in depressive affect was equivalent across the three SM groups, Δχ²(2) = 1.141, p = .56. However, declines in depressive affect over time were more evident among the SM groups (-.072) than among the Heterosexual-identified/non-SM group (-.023), Δχ²(1) = 38.79, p < .001. There were fewer group differences in self-esteem. At intercept the three SM groups reported equivalent levels of self-esteem, Δχ²(2) = 2.91, p = .23, but collectively they reported lower levels of self-esteem than the Heterosexual-identified/non-SM group, Δχ²(1) = 67.84, p < .001. There were no group differences in the growth of self-esteem.
Summary-Wave 3 sexual orientation was associated with psychological well-being. It appeared to have a stronger relation with intercept levels than with growth, with SM reporting lower psychological well-being at intercept. Among the SM groups, early and stable reporting of same-sex attractions was associated with lower initial levels of psychological well-being but greater increases in psychological well-being over time. Within the Heterosexual-identified/non-SM group, early and stable reporting of no same-sex attractions was associated with lower initial levels of depressive affect. Finally, after controlling for instability in reported same-sex attractions, the initial gap in psychological well-being between SM and Heterosexual-identified/non-SM was larger, but that gap also closed at a faster rate over time.
---
Sexual-Minority Status and Psychological Well-Being: Cohort and gender differences
Building on earlier analyses, we next examined whether the relation between same-sex sexuality and psychological well-being varied across cohort and gender. Preliminary analyses indicated that cohort differences and gender differences in psychological well-being were equivalent across the three SM groups. Consequently, for this portion of the analyses we did not distinguish between the individual SM groups but instead compared all SM to the Heterosexual-identified/non-SM group.
Cohort-In order to examine differences across cohort, we used a cohort-by-SM-status grouping variable that broke respondents into four groups: (1) young Heterosexual-identified/non-SM; (2) old Heterosexual-identified/non-SM; (3) young SM; and (4) old SM. When using this grouping variable, we used the model constraint command within Mplus (Muthen & Muthen, 1998-2009), which allows for the creation of new model parameters based on mathematical operations involving already existing model parameters. Using the model constraint command we created four new model parameters: (1) a young intercept difference score [(intercept estimate for young SM) minus (intercept estimate for young Heterosexual-identified/non-SM)]; (2) an old intercept difference score [(intercept estimate for old SM) minus (intercept estimate for old Heterosexual-identified/non-SM)]; (3) a young growth difference score [(growth estimate for young SM) minus (growth estimate for young Heterosexual-identified/non-SM)]; and (4) an old growth difference score [(growth estimate for old SM) minus (growth estimate for old Heterosexual-identified/non-SM)]. Note that these difference scores represented the model factor for SM relative to the model factor for Heterosexual-identified/non-SM. Thus a negative value indicated that the SM factor was lower, whereas a positive value indicated that the SM factor was higher.
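The model constraint command here amounts to defining new parameters as simple differences of existing group-specific estimates. A minimal sketch (hypothetical numbers for illustration; the actual values come from the fitted multi-group model):

```python
def difference_scores(sm, non_sm):
    """Difference scores comparing SM to Heterosexual-identified/non-SM within
    a cohort: a negative value means the SM factor is lower, a positive value
    means it is higher. Each argument is a dict of factor estimates."""
    return {
        "intercept_diff": sm["intercept"] - non_sm["intercept"],
        "growth_diff": sm["growth"] - non_sm["growth"],
    }
```

The cohort comparison then asks whether the young-cohort difference scores can be constrained to equal the old-cohort difference scores without worsening model fit.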
Through a series of focused model comparisons, we examined whether these difference scores varied across the young and old cohorts. Specifically, based on χ² difference tests, we compared the fit of a model where the young intercept difference score and the old intercept difference score were constrained to be equal to the fit of a model where they were not constrained to be equal. We conducted a similar model comparison for the young growth difference score and the old growth difference score.
We used this approach because it allowed for the examination of a two-way interaction (cohort by SM status) while allowing the relation between instability in reported same-sex attractions and psychological well-being to vary across groups. We conducted analyses with and without controlling for instability in reported same-sex attractions. We examined differences in depressive affect and self-esteem in separate models. Results are listed in Table 5, where significant differences are indicated by a superscripted number.
When not controlling for instability in reported same-sex attractions, the young growth difference score was larger than the old growth difference score, Δχ²(1) = 6.17, p < .05. Among the young cohort, growth in depressive affect was more positive among SM than among Heterosexual-identified/non-SM (.017). Among the old cohort, however, growth in depressive affect was equivalent across the two groups (-.007).
Preliminary analyses revealed that the relation between instability in reported same-sex attractions and both depressive affect and self-esteem was equivalent across cohort for Heterosexual-identified/non-SM. For SM we found that the relation between instability in reported same-sex attractions and depressive affect was equivalent across cohort, but the relation with self-esteem varied across cohort. The relation was more pronounced among the young cohort, as shown in Table 6. Based on these preliminary findings, we constrained the relation between instability in reported same-sex attractions and psychological well-being to be equal across cohort (except for SM and self-esteem, where the relation varied across cohort). As in earlier analyses, we allowed the relation between instability in reported same-sex attractions and psychological well-being to vary across Heterosexual-identified/non-SM and SM. When controlling for instability in reported same-sex attractions, the relation between SM status and depressive affect did not vary across cohort. However, for self-esteem the intercept difference score, Δχ²(1) = 7.13, p < .01, and the growth difference score, Δχ²(1) = 5.14, p < .05, were much larger among the young cohort, and only among the young cohort were these difference scores significantly different from zero. More specifically, only among the young cohort did those of SM status have, relative to Heterosexual-identified/non-SM, lower self-esteem at intercept (-.734) but greater increases in self-esteem over time (.100).
Gender-In order to examine gender-by-SM differences, we used the same analytic strategy that we used to examine cohort-by-SM status differences, except that we used a different grouping variable. The gender-by-SM status grouping variable broke respondents into four groups: (1) male Heterosexual-Identified/non-SM; (2) female Heterosexual-Identified/non-SM; (3) male SM; (4) and female SM. Results are listed in Table 5. Again, significant differences in difference scores are indicated by a superscripted number in Table 5.
When not controlling for instability in reported same-sex attractions, depressive affect growth difference scores were not equivalent among males and females, Δχ²(1) = 4.36, p < .05. More specifically, among females growth in depressive affect was more positive among SM than among Heterosexual-identified/non-SM (.015). Among males, however, growth in depressive affect did not differ across Heterosexual-identified/non-SM and SM (-.002). The relation between SM status and self-esteem did not vary across gender.
Preliminary analyses revealed that the relation between instability in reported same-sex attractions and both depressive affect and self-esteem was equivalent across gender for Heterosexual-Identified/non-SM. For SM, however, the relation between instability in reported same-sex attractions and both depressive affect and self-esteem varied across gender, and it was more pronounced among males, as shown in Table 6. The relation between instability in reported same-sex attractions and psychological well-being was thus constrained to be equal across gender for Heterosexual-Identified/non-SM and was allowed to vary across gender for SM. Again, we allowed the relation to vary across Heterosexual-Identified/non-SM and SM as well. When controlling for instability in reported same-sex attractions, the relation between SM status and psychological well-being did not vary across gender.
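The nested-model comparisons above rely on chi-square difference (likelihood-ratio) tests with a single degree of freedom: the constrained model's chi-square minus the freely estimated model's chi-square is itself chi-square distributed with 1 df. As an illustrative sketch (not the authors' analysis code), the reported Δχ² values can be converted to p-values with only the standard library, since for 1 df the survival function has a closed form:

```python
import math

def chi2_sf_1df(x):
    """P(X > x) for a chi-square variable with 1 degree of freedom.

    For 1 df the survival function has the closed form erfc(sqrt(x / 2)),
    so no statistics library is needed.
    """
    return math.erfc(math.sqrt(x / 2.0))

def chi2_diff_test(chisq_constrained, chisq_free):
    """Chi-square difference test for nested models differing by one parameter.

    The constrained model (parameter held equal across groups) has the larger
    chi-square; the difference is chi-square distributed with 1 df.
    """
    delta = chisq_constrained - chisq_free
    return delta, chi2_sf_1df(delta)

# Difference values reported in the text:
for delta in (7.13, 5.14):
    print(f"delta_chi2 = {delta}, p = {chi2_sf_1df(delta):.4f}")
```

For example, the reported Δχ²(1) = 7.13 gives p ≈ .008, consistent with the p < .01 stated in the text, and Δχ²(1) = 5.14 gives a p-value between .02 and .05.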
Summary-The relation between SM status and psychological well-being varied across both cohort and gender. In the case of depressive affect, patterns evident among the entire sample when instability controls were not included (i.e., greater increases in depressive affect over time among SM, Heterosexual-identified/SM in particular) were more evident among those in the young cohort and among females. However, in the case of self-esteem, patterns found among the entire sample (i.e., intercept differences across SM and Heterosexual-Identified/non-SM) were more evident among the young cohort. A pattern that was not evident among the entire sample emerged as well: among the entire sample there was no instance in which growth in self-esteem varied across any of the sexual orientation groups, but among the young cohort, growth in self-esteem was more positive among SM, while it was equivalent across SM status among the old cohort. This differential growth pattern across cohort emerged only when controls for instability in reported same-sex attractions were included. Finally, the relation between early and stable reports of same-sex attractions and psychological well-being (i.e., lower initial levels but greater increases over time) was more pronounced among males.
---
Discussion
Overall, four main conclusions can be drawn from this study: (1) Psychological well-being disparities between SM and non-SM are in place by early adolescence, and then for many the remainder of adolescence is a recovery period when the disparities narrow over time. (2) Early and stable reporting of same-sex attractions is associated with a greater initial deficit in psychological well-being, but because it is also associated with a quicker recovery over time, the effects are often not long lasting. (3) Though the relation between sexual orientation during early adulthood (i.e., Wave 3) and adolescent psychological well-being was quite similar across gender, the negative relation between psychological well-being and early, stable awareness of same-sex attractions was more pronounced among males. (4) Relative to Bisexual and Homosexual-identified/SM, the understudied yet relatively sizable group of Heterosexual-identified/SM appeared to be at equal risk for deficits in psychological well-being.
---
What Does Sexual Orientation during Early Adulthood Mean for Adolescence?
Before discussing the findings, we will address some implications that the study's measure of sexual-minority status might have for the conclusions that can be drawn. The measure of sexual-minority status was based on a measure of sexual orientation during early adulthood (Wave 3). Thus, the measure of sexual orientation was a static measure that failed to account for the fluidity of sexual identification over time (Diamond, 2006). Nonetheless, the measure was linked with indicators of psychological well-being that predated it by over six years. While one's declared sexual orientation during early adulthood may not be indicative of one's sexual orientation during adolescence, it is likely indicative of whether one dealt with same-sex sexuality at some point during adolescence. It is also likely indicative of the importance or primacy of that same-sex sexuality within one's overall sense of adolescent sexuality. For example, while both those who identified as homosexual and those who identified as bisexual during early adulthood likely dealt with same-sex attractions during adolescence, for those who identified as homosexual those adolescent same-sex attractions may have been a more important or central component of their adolescent sexuality. Importantly, though a rough indication, the measures of same-sex attraction during adolescence help to narrow down when during adolescence these individuals first dealt with this same-sex sexuality. Thus, when paired together, the adolescent measures of same-sex sexuality and the early adulthood measure of sexual orientation provide, within a large, national, longitudinal sample, a meaningful account of sexuality as well as of the emerging awareness of that sexuality.
---
The Emergence of the Negative Relation between SM Status and Psychological Well-Being
The driving motivation for this study was to examine whether the negative relation between SM status and psychological well-being (1) is similar to that of other social statuses, where differences are primarily in place by early adolescence; or (2) continues to emerge through the adolescent years, when SM are thought to encounter unique developmental challenges. The findings suggest that the negative relation between SM status (based on the declaration of a sexual orientation that includes same-sex attractions during early adulthood) and psychological well-being is largely in place by early adolescence. This is evidenced by the fact that among both the young and old cohorts, and regardless of adolescent patterns of reported same-sex attractions, the discrepancies in psychological well-being were largest at the study's onset (when those in the young and old cohorts ranged between 12 and 15, and 16 and 19, respectively). Moreover, middle childhood and early adolescence appear to be more of a struggle for those who report early and stable same-sex attractions, since by early adolescence these individuals report the greatest deficits in psychological well-being relative to Heterosexual-Identified/non-SM.
Across adolescence the negative relation between SM status (again based on declared sexual orientation during early adulthood) and psychological well-being either remained stable or decreased. Among those who reported early and stable same-sex attractions, the negative relation between SM status and psychological well-being decreased across time. Importantly, among the young cohort (12-15 years of age at Wave 1), this pattern held true for both depressive affect and self-esteem. This finding suggests that for those who reported early, stable same-sex attractions, the negative relation between SM status and psychological well-being decreased across time, even among those who were early adolescents at the onset of the study. When ignoring same-sex attractions and focusing on early adulthood sexual orientation, the relation between SM status and psychological well-being was stable across time except in two instances. The first exception was among the whole sample, where the well-being gap between Heterosexual-Identified/non-SM and Heterosexual-identified/SM widened across time. This pattern held only for depressive affect, and it was likely due to the fact that Heterosexual-identified/SM were the group most likely to report unstable same-sex attractions; these types of attractions, in turn, were associated with less of an increase in psychological well-being across time. The second exception was among the young cohort, where the negative relation between SM status and psychological well-being increased across time. Again, this pattern held only for depressive affect and only for those reporting unstable same-sex attractions. As noted above, this pattern was reversed when controlling for instability in reported same-sex attractions. Taken together, the negative relation between SM status and psychological well-being generally did not become more pronounced across adolescence. To the contrary, it either remained stable or even decreased among those who reported early and stable same-sex attractions.
---
Why Is the Negative Relation in Place by Early Adolescence?
Most of the challenges associated with being a sexual minority (e.g., dealing with homophobia and bullying, trying to find other SM peers, navigating romantic relationships, coming out) are confronted over the course of adolescence, not prior to it. The relation between declared sexual orientation during early adulthood and psychological well-being seems to manifest by early adolescence and does not increase thereafter, which speaks to the deleterious effects of feeling different from others during middle childhood and early adolescence. Though individuals must deal throughout the lifespan with being members of devalued groups and the sense of difference that accompanies those memberships, middle childhood is the first time individuals are confronted with this sense of difference. After all, it is not until middle childhood that youth are cognitively capable of internalizing this sense of difference as meaningful to their own personal sense of value (Harter, 2006). Consequently, they likely have not yet acquired the tools for dealing with this sense of difference, and as a result those in middle childhood may be more likely to have their sense of well-being negatively influenced by it.
Potentially compounding the deleterious effects of this sense of difference during middle childhood is the fact that, unlike members of other stigmatized groups, SM often deal with this sense of difference in isolation, since those around them are predominantly, if not completely, of the sexual majority (D'Augelli & Hershberger, 1993). Contrast this with other youth of at-risk social status, such as females or members of racial minorities, who (1) are likely to have role models in the home or at school as well as peers and friends who share their status and (2) likely have parents or extended family members actively socializing them to deal with the challenges associated with their social status (Bowman & Howard, 1985; Cross, 1991; Thornton, 1997). Finally, the initial deficits may be larger among those SM reporting early and stable same-sex attractions because they are more likely to be dealing with this novel sense of difference at an even earlier age, an age at which they are even more likely to be isolated from others in the SM community (D'Augelli, 1996; Friedman et al., 2008).
---
Who "Recovers" and Why?
The negative relation between a declared sexual orientation during early adulthood that includes same-sex attractions and adolescent psychological well-being did decrease across adolescence, but only for a select group. The "recovery" or narrowing of psychological well-being deficits between SM and Heterosexual-Identified/non-SM was limited to those who reported early and stable same-sex attractions. In the case of self-esteem, the recovery was limited to the young cohort, those who ranged between 12 and 15 at the onset and between 18 and 23 at the conclusion of the study. Why the recovery was limited to those who reported early, stable same-sex attractions requires further examination, but we offer two possible explanations. First, SM who reported early, stable same-sex attractions had farther to recover. That is, relative to Heterosexual-Identified/non-SM, SM who reported early and stable same-sex attractions reported far lower initial levels of psychological well-being than did SM who did not report early and stable same-sex attractions. Second, SM who reported early and stable same-sex attractions may have benefited from having longer to adjust to their status and incorporate it into their sense of self (Floyd & Bakeman, 2006; Savin-Williams, 1995). Regardless of the reason, it seems that the earlier the awareness of same-sex attractions, the greater the initial deficit in psychological well-being, but also the steeper the recovery. This pattern of recovery among those reporting early, stable same-sex attractions is inconsistent with Friedman et al.'s (2008) findings that those progressing through gay-related developmental milestones at earlier ages tended to report lower functioning during adulthood. Respondents included in the Friedman et al. (2008) study were teenagers in the early to mid 1980s, whereas respondents in Add Health were teenagers in the mid to late 1990s. Perhaps historical increases in the acceptance of homosexuality (Savin-Williams, 2005) have contributed to reductions in the long-term consequences of an early awareness of same-sex sexuality.
In cases where there was a recovery, such recovery was generally not complete. SM still reported deficits in psychological well-being during early adulthood; those deficits were simply smaller than they were during early adolescence. With and without controls for instability in reported same-sex attractions, post-hoc comparisons of Wave 3 psychological well-being revealed that each of the three SM groups still reported lower psychological well-being relative to the Heterosexual-Identified/non-SM group (results not tabled). The only exception was among Homosexual-identified/SM who reported early and stable same-sex attractions; this group reported Wave 3 levels of depressive affect equivalent to those of Heterosexual-Identified/non-SM.
---
Overall Lack of Gender Differences
The relation between sexual orientation during early adulthood (i.e., Wave 3) and adolescent psychological well-being was largely equivalent across gender. There was, however, a gender difference in the negative relation between early and stable reports of same-sex attractions and initial levels of psychological well-being, with the negative relation proving more pronounced among males. As noted in the Introduction, previous research has found that the negative relation between SM status and psychological well-being is more pronounced among males (Balsam et al., 2005; Cochran et al., 2003; Elze, 2002; Fergusson et al., 2005). This study's findings suggest a more nuanced pattern. Rather than the relation between sexual orientation and psychological well-being being more pronounced among males, it may be that an early awareness of one's same-sex attractions (and in turn one's sexual orientation) has a more detrimental impact on males than on females. For the most part, the relation between early awareness and growth in psychological well-being did not vary across gender, suggesting that these effects persist into early adulthood. Early awareness may be more problematic for males because sexuality as well as gender roles are generally more rigid among males (Diamond, 2006; Langlois & Downs, 1980; Richardson, Bernstein, & Hendrick, 1980), and because relative to females exhibiting same-sex sexuality, males exhibiting same-sex sexuality are more likely to be victimized by members of their own gender (Dunkle & Francis, 1990; Russell & Joyner, 2001).
---
Limitations
This study has several important limitations, the first being the limitations of our measure of sexual orientation as discussed earlier. A second limitation is that the sample sizes of the SM-sub groups were likely not sufficiently large to capture small to modest effects. This may be why the present study found few psychological well-being differences among the three SM groups. Finally, the earliest data available in Add Health are from early adolescence. Ideally, the data would extend back into middle childhood. Unfortunately preadolescent data on the SM community are difficult to obtain, in part because parents and guardians tend to be wary of researchers asking their pre-adolescent children questions pertaining to sexuality.
---
Conclusions and Next Steps
Sexual minorities, or those exhibiting same-sex sexuality, are a heterogeneous group who vary not only in sexual orientation but also in the developmental course they follow in terms of their awareness and acceptance of their sexual orientation. Among those exhibiting same-sex sexuality, there also is heterogeneity in developmental patterns of psychological well-being. Across adolescence, trajectories of psychological well-being converge, such that by early adulthood those exhibiting same-sex sexuality look more similar both to one another and to those not exhibiting same-sex sexuality. In developmental science this phenomenon is termed equifinality (Bertalanffy, 1968): multiple pathways to the same (or similar) end point. This pattern of findings highlights the important contributions that developmental theory and longitudinal data can make to our understanding of same-sex sexuality, sexual orientation, and psychological well-being.
More specifically, the pattern of results suggests that (1) the negative relation between SM status and psychological well-being is in place by early adolescence, and (2) the exact pathway or trajectory that one follows across adolescence is more a function of the timing of awareness of same-sex attractions than it is of actual sexual orientation (as declared during early adulthood). These results raise the possibility that community resources and social support groups geared towards SM youth, now available in many high schools, may benefit students in grade school and middle school as well. Finally, findings from this study are consistent with emerging research suggesting that relative to those who identify as a SM (i.e., bisexual or homosexual), Heterosexual-identified/SM, an understudied though sizable subgroup of the SM population who comprise about 8% of the overall population and about 80% of the SM population (Austin & Corliss, 2008; Remafedi, Resnick, Blum, & Harris, 1992), are at relatively equal risk (and in some cases greater risk than Homosexual-identified/SM) for deficits in psychological well-being. Future research should incorporate this subgroup when possible.

Figure: Growth model examining psychological well-being across 3 waves.

¹ Δχ²(1) = 6.17, p < .05
² Δχ²(1) = 7.13, p < .01
³ Δχ²(1) = 5.17, p < .05
⁴ Δχ²(1) = 4.36, p < .05

---

Table 6

Among SM, the relation between reported instability in same-sex attractions and psychological well-being, by cohort and gender
Rational drug use is a pivotal concept linked with morbidity and mortality. Immigration plays a significant role as a determinant affecting individuals' health-related attitudes, behaviors, and the pursuit of health services. Within this context, the study was initiated to assess the factors influencing health literacy and rational drug use among Syrian immigrants in Istanbul. A cross-sectional study was undertaken on 542 Syrian adults utilizing a three-part questionnaire encompassing sociodemographics, rational drug use, and the e-health literacy scale (eHEALS). With an average age of 39.19 ± 13.10 years, a majority of participants believed medications should solely be doctor-prescribed (97%) and opposed keeping antibiotics at home (93.7%). Yet, 62.5% thought excessive herbal medicine use was harmless. The mean eHEALS score stood at 20.57 ± 7.26, and factors like age, marital status, income, and duration of stay in Turkey influenced e-health literacy. Associations were seen between low e-health literacy and being female, being older, having a lower education level, and regular medication use. Syrian immigrants displayed proper knowledge concerning antibiotics yet exhibited gaps in their understanding of general drug usage, treatment adherence, and herbal medicines. Approximately 80.3% had limited health literacy, pointing to the need for targeted interventions for enhanced health and societal assimilation. | Introduction
Health is defined not only as the absence of disease and disability but also as a state of complete physical, mental and social well-being [1]. Psychosocial, economic and cultural factors and adequate utilization of health services are important in achieving and maintaining well-being [2]. The number of refugees and asylum seekers worldwide is increasing; in 2022, 112.6 million people fell into the group defined as refugees or asylum seekers [3]. More than 3.5 million Syrian refugees live in Turkey. Health problems are more common among migrants [4]. In addition, migrants' psychosocial and economic conditions and the language barriers they face negatively affect their health status, as their search for health services remains limited [2]. Migration experience and cultural factors affect migrants' perception of drugs and antibiotics, and irrational drug use is common among migrants [5,6].
Antibiotic resistance is one of the most important global public health threats. In particular, irrational and improper use of antibiotics accelerates the development of resistance, which undermines the treatment of infectious diseases, prolongs hospitalization, and leads to increased health-related costs and mortality rates [7,8]. Rational use of drugs, especially antibiotics, is an important factor in preventing disease-related morbidity and mortality [9]. Inadequate health literacy, self-medication and over-the-counter medication supply are important factors leading to widespread and uncontrolled use of drugs [10]. Interventions in Turkey have shown that educational activities are effective in improving the prescription, distribution and utilization of antibiotics [11][12][13].
Health literacy is defined as the knowledge and the cognitive and social competence required for individuals to access, understand, evaluate and use health-related information to protect and improve their health, make decisions about their health status and improve their quality of life [14,15]. Health literacy is an important public health goal that also refers to individuals' capacity and competence to meet complex health needs [16][17][18]. Challenging living conditions, cultural factors, language barriers, the complex and multidimensional structure of the health system, and social and economic disadvantages negatively affect migrants' search for and utilization of health services and their health in general [15]. Because health information sources are diverse and the volume of information is large, the internet has become an important resource for accessing accurate health information. The internet is a useful and effective tool for accessing accurate health-related information and developing various skills to protect and improve health [19]. E-health literacy refers to an individual's ability to search for, find, understand and evaluate health-related information from digital sources and to use it for any health condition and/or problem [1,18,19]. Various studies with migrants have shown that their health literacy levels are inadequate or problematic (65.1-67.8%) [20,21]. Immigration is an important social determinant of health related to access to health services, utilization of health services, health perception and health literacy [20,22]. Health literacy is of critical importance in eliminating health inequalities and increasing the health level of society [21].
The health literacy level of individuals is an important and determining factor in rational drug use. Therefore, efforts to increase the health literacy of immigrants will contribute greatly to increasing their knowledge about rational drug use and to developing positive attitudes. In this study, we aimed to determine the rational drug use and health literacy levels of Syrian adults living in a district of Istanbul and to examine the related factors.
---
Results
The mean age (±SD) of the research group was 39.19 ± 13.10 years. In this study, 52.2% (283 people) of the participants were female and 47.8% (259 people) were male. It was determined that 46.5% of the immigrants in the research group were in the age group of 40 and above, 76.9% were married, 53.0% had high school or higher education, 80.4% had low income, 64.0% had been living in Turkey for 7 years or more, 71.4% lived in the same house with 5 or more people, 36.5% had chronic diseases, 60.9% used regular medication and 87.1% applied to a physician in the first place when they got sick. Data on the sociodemographic characteristics and disease-health status of the research group are shown in Table 1.

In this study, 97.0% of the immigrants in the research group stated that medication should only be used when prescribed by a doctor. Furthermore, 93.7% stated that people should not keep antibiotics in their homes and then use them for other diseases. In total, 96.1% stated that physicians should prescribe antibiotics only when needed, and 75.8% stated that using enough medication, not too much, leads to recovery. Data on immigrants' attitudes and approaches to rational drug use are shown in Table 2.

The eHEALS mean (±SD) score of the immigrants was 20.57 ± 7.26. It was determined that 80.3% of the immigrant group had limited health literacy, and 19.7% had adequate health literacy. Data on the eHealth Literacy Scale scores are shown in Table 3. In the study group, the mean rank of eHEALS was significantly higher among immigrants who were in the 30-39 age group, married, had low income, had been living in Turkey for 7 years or more, did not have chronic diseases, did not use regular medication and had a monthly out-of-pocket health expenditure of less than 500 TL (p < 0.05). The comparison of sociodemographic and health-disease status with eHEALS is shown in Table 4.
Among the independent variables, age group (p = 0.019, OR = 2.83), gender (p = 0.048, OR = 1.60), education level (p = 0.003, OR = 3.96) and regular medication use (p < 0.001, OR = 0.18) contributed significantly to the model. The regression analysis of health literacy level according to sociodemographic characteristics is shown in Table 5.
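Logistic regression reports each effect as an odds ratio, where OR = exp(β): an OR above 1 means higher odds of the modeled outcome, and an OR below 1 means lower odds. As a minimal sketch (the variable labels below are illustrative, not the study's coding scheme), the coefficients implied by the reported odds ratios can be recovered by taking natural logs:

```python
import math

# Odds ratios reported in the regression analysis (Table 5); the labels
# are illustrative placeholders, not the study's actual variable coding.
reported_or = {
    "age_group": 2.83,
    "gender": 1.60,
    "education_level": 3.96,
    "regular_medication_use": 0.18,
}

def coef_from_or(odds_ratio):
    """Logistic-regression coefficient implied by an odds ratio: beta = ln(OR)."""
    return math.log(odds_ratio)

for name, or_value in reported_or.items():
    beta = coef_from_or(or_value)
    direction = "raises" if or_value > 1 else "lowers"
    print(f"{name}: OR = {or_value:.2f} (beta = {beta:+.2f}) {direction} the odds")
```

Note that the OR of 0.18 for regular medication use corresponds to a negative coefficient, i.e., an association in the opposite direction from the other three predictors.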
---
Discussion
Socially and economically disadvantaged migrants are one of the groups that should be prioritized for public health interventions. It is important to determine the knowledge and behaviors of migrants regarding rational drug use. Health literacy is an important tool to increase the health level of individuals and society. The level of health literacy among migrants is a critically important determinant of rational drug use. In this study, we aimed to determine the rational drug use and e-health literacy levels of Syrian migrants and to evaluate the associated factors. It was found that 76.9% of the migrants in the study group were married, 53.0% had high school or higher education, 80.4% had low income, 60.9% used regular medication, 87.1% consulted a physician first when they got sick, and 80.3% had limited e-health literacy.
Approximately 3.5 million Syrian immigrants live in Turkey. The average age of these immigrants is 22.32 years. Overall, 72.68% are women and children, and 30.23% are under the age of 10. Furthermore, 2.23% live in temporary shelter centers and 97.7% live in cities (Istanbul: 531,996; Gaziantep: 434,045; Şanlıurfa: 317,786), and the ratio of Syrian immigrants to Turkey's population is 3.73% [23]. The fact that the individuals in the research group first consult a physician when they get sick shows that they care about their health and seek to protect it. In addition, the 87.1% preference for consulting a physician when ill suggests that immigrants do not experience difficulties in accessing health services in Turkey. In a meta-analysis, it was shown that migration-related factors, as well as social and economic conditions, may affect the health of immigrants [24].
In this study, 97% of the immigrants in the research group stated that medication should only be used when prescribed by a doctor. Furthermore, 93.7% stated that people should not keep antibiotics in their homes and then use them for other illnesses. Moreover, 96.1% stated that doctors should prescribe antibiotics only when needed, and 75.8% stated that using enough medication, not too much, leads to recovery. In this study, 51.0% stated that people can stop taking medication if they feel well during treatment, and 38.0% stated that there is no harm in recommending medication to their relatives with similar complaints. Of the sample, 38.7% stated that herbs can be used instead of medication, and 62.5% stated that using herbal medication as much as desired is not harmful to health. Furthermore, 36.7% stated that the form and duration of medication use cannot be determined by the individual, 61.0% stated that medications cannot be used to the same extent in every age group, 68.1% stated that the duration of use of medications is not the same, and 67.4% stated that expensive medications are not more effective. The fact that the majority of the immigrants in the study group have low income and about half of them have an education level below high school would suggest that their knowledge and perceptions about antibiotics are insufficient. However, the results of the study show that immigrants' general knowledge and perceptions about antibiotic use are better than expected [25]. In addition, it is seen that medication compliance is low, and it is common to recommend medication to relatives with similar symptoms.
On the other hand, the presence of positive perceptions of herbal medicines and their use among Syrian immigrants may be related to sociocultural factors and past experiences.
Increasing antibiotic resistance is now considered a public health problem because it poses both a threat to human health and a serious economic cost [26,27]. In a study conducted with immigrants in the Netherlands, it was shown that immigrants had a more limited perception and knowledge of antibiotics compared to the native population [5]. Although physical and mental health problems are common in immigrants, their low socioeconomic status is associated with poor health outcomes [28]. There are studies showing that treatment compliance is low in immigrants [29]. Studies conducted in Turkey have shown that age, marital status, education level, income level, family structure, place of residence, employment status and health education status are associated with rational drug use [30][31][32]. In a different study, it was shown that giving importance to health and seeking healthy life behaviors positively affected the attitude toward rational drug use [33]. In another study conducted in Turkey, sociodemographic characteristics such as age, gender, employment status and education level were found to be associated with the level of rational drug use knowledge of Syrian immigrants [34]. In a meta-analysis, it was shown that factors such as previous similar symptoms and antibiotic experiences, perceived low severity of the disease, intention to recover quickly, difficulty in accessing a physician or health facility, lack of trust, low cost and ease of use affect/increase self-medication [35]. In another meta-analysis, a positive relationship was found between health literacy and medication adherence [36].
Today, the internet has become a frequently used source of health information because of its ease of access and use, low cost and ubiquity. People frequently use the internet for disease prevention, healthy living behaviors and general disease conditions [37]. However, users may face difficulties in accessing useful, high-quality health and medical information online [38]. It is inevitable that individuals with low income and low levels of e-health literacy, such as immigrants, will experience difficulties in this situation. As a matter of fact, the median eHEALS value of the migrants in our research group was found to be 21. The e-health literacy level of 80.3% of the immigrant group was found to be limited (insufficient + problematic), and that of 19.7% was found to be sufficient.
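eHEALS is an eight-item instrument scored on 5-point items, so total scores range from 8 to 40. The sketch below classifies a total score as limited or adequate under an assumed cut-off of 26, a convention used in some eHEALS studies; the paper does not restate the threshold it applied, so the cut-off here is a placeholder, not the study's actual criterion:

```python
def ehealth_category(total_score, adequate_cutoff=26):
    """Classify an eHEALS total (8 items, range 8-40) as limited or adequate.

    The cut-off of 26 is an assumed, commonly used convention; the study
    itself does not restate the threshold it applied.
    """
    if not 8 <= total_score <= 40:
        raise ValueError("eHEALS total must lie between 8 and 40")
    return "adequate" if total_score >= adequate_cutoff else "limited"

# The sample's reported median of 21 falls below the assumed cut-off,
# consistent with most respondents being classified as limited.
print(ehealth_category(21))  # limited
```

Under this assumed cut-off, the reported median of 21 sits in the limited range, which is consistent with the finding that 80.3% of respondents had limited e-health literacy.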
In a recent study conducted in the same city, it was also observed that immigrant health literacy levels were insufficient [39]. Immigrants in the study group with low income levels may have limited internet access and use. In addition, low education level and sociocultural factors in the study group may have affected immigrants' access to accurate and reliable information about health on the internet and their ability to understand and use this information. In addition, immigrants' health perceptions, chronic disease status and health-information-seeking behaviors or habits may affect their e-health literacy levels. The level of education and health literacy of society affects the health status of individuals and their attitudes and perceptions towards medicines [40]. On the other hand, in disadvantaged groups such as immigrants and the elderly, technological applications can make a significant contribution to individuals' access to reliable health information and making the right health decisions [41].
It is important that the health services of immigrant-hosting countries are appropriate to the personal needs, living conditions, sociocultural characteristics and competence levels of immigrants. Improving health literacy plays a critical role at this point. In Turkey, Syrian immigrants can access health services free of charge [42]. In addition, health services are provided to these immigrants by Syrian healthcare professionals through reinforced immigrant health centers. In these centers, where specialist physicians in various branches work, preventive health services (immunization, family planning, education, screening programs), outpatient diagnostic and therapeutic health services are provided without language barriers. This situation positively affects Syrian immigrants' access to and use of health services and contributes to the protection and improvement of their health. It should not be overlooked that it also contributes positively to their health literacy status. A meta-analysis has shown that the concept of health literacy is very important for protecting and improving the health of individuals and is an important determinant of the health level of society [43]. Basic health literacy facilitates individuals' access to health services, enables reducing health inequalities and contributes to the development of health services policies at the societal level [44].
In the research group, the mean ranks of e-health literacy were significantly higher among those aged 30-39; those who were married; those with low income; those living in Turkey for 7 years or more; those without chronic diseases; those not using regular medication; those who, when sick, do nothing for a while and then use medication according to their own experience; and those with a monthly out-of-pocket health expenditure of less than 500 TL (p < 0.05). The presence of social support within the family among married individuals in the research group may have contributed to the well-being and better health of individuals. In a meta-analysis, the positive effect of education, income level and the presence of social support on individuals' health literacy was shown [45]. Immigrants who live in Turkey for longer periods of time overcome the language barrier to a large extent. Since they are in contact with the community, their children or siblings go to school, and their spouses or family members work, there is usually someone in the family who speaks Turkish. This makes it easier for Syrian immigrants who live in Turkey for longer periods of time to follow official procedures and to access and use health services. As a matter of fact, Syrian immigrants who are registered in the city where they reside in Turkey can access public health services free of charge thanks to their temporary protection status, and they can also get their medicines free of charge or by paying co-payments. This is supported by the fact that the vast majority (84.1%) of immigrants in the study had a small out-of-pocket health expenditure (<500 TL). The high level of eHealth literacy of immigrants who do not have chronic diseases and do not use regular medication may be related to the fact that they use digital resources more intensively in accessing reliable and accurate information about protecting their health and adopting healthy life behaviors because they care more about their health.
Systematic reviews have shown that education level is associated with eHealth literacy [46]. Individuals with low health literacy are likely to miss preventive health services, show poor treatment compliance and chronic disease management and, more generally, have poor health outcomes [47]. Factors such as cultural beliefs about health and illness, language problems and socioeconomic status affect immigrants' communication with healthcare providers and their understanding of and compliance with medical instructions [48].
In the logistic regression analysis established to predict the level of eHealth literacy from sociodemographic characteristics, model fit was found to be good. Since it was thought that the effect of some independent variables would be more significant within the scope of the research, these variables (age, gender, education level, chronic disease, continuous medication use, etc.) were included in the regression analysis. In addition, in order to obtain a stronger prediction model with fewer variables, the regression analysis was performed with only some of the independent variables. Among the independent variables, age group (p = 0.019, OR = 2.83), gender (p = 0.048, OR = 1.60), education level (p = 0.003, OR = 3.96) and regular medication use (p < 0.001, OR = 0.18) contributed significantly to the model. Female gender, advanced age, low education level and regular medication use were associated with lower eHealth literacy. In the traditional sociocultural structure of Syrian immigrants, it is mostly men who have more contact with the outside social environment, attend school and have a job. For this reason, immigrant women are less likely to access the internet, as they lack both language learning opportunities and economic independence. This situation also contributes to the limited ability of immigrant women to search for, understand and use health-related information on the internet. Immigrants of older age and lower educational attainment have more problems accessing the internet and understanding and evaluating accurate and reliable health-related information on the internet. In a study conducted with Syrian immigrants in Sweden, it was shown that immigrants with low educational levels had limited health literacy [2]. Providing education to individuals with chronic diseases has been shown to increase rational drug use and health literacy [49].
Prolonged length of stay, positive perception of social status and educational level of immigrants in the country of migration affect the level of health literacy [20]. Immigrants are at high risk of having limited health literacy. This plays an important role in achieving better health for themselves and their families [50]. In another meta-analysis, it was shown that providing accessible and reliable health information on the internet or in the media in simple and understandable language would contribute to improving individuals' health literacy levels [51].
---
Strengths and Limitations of the Research
Given Turkey's significant standing in global migration statistics, research conducted on migrants within the country undeniably offers critical contributions to the literature. The present study was meticulously executed in an area densely populated by migrants, employing Arabic-speaking interpreters. This approach ensured a direct engagement with the migrants, allowing for a more authentic representation of their voices and experiences. Specifically, by focusing on this distinct and often hard-to-reach migrant group, our research aims to fill a palpable gap in the literature by centering on their subjective evaluations.
However, it is imperative to underscore certain limitations of our study. Conducting the research in a singular region may impose constraints on the generalizability of the findings to the broader migrant population in Turkey. Additionally, the involvement of interpreters, while invaluable, could potentially raise concerns about the accuracy and impartiality of the translated responses from the migrants. Moreover, as the study predominantly focuses on Arabic-speaking migrants, it does not encompass insights from migrants of other linguistic backgrounds.
---
Materials and Methods
---
Research Type and Research Population
A cross-sectional study was conducted. The population of the study consisted of Syrian immigrants over the age of 18 who applied to the Sultanbeyli Strengthened Migrant Health Center. Sultanbeyli is a district of Istanbul with a total population of 358,201 and the lowest socioeconomic level in the city. Around 22,000 Syrian immigrants live in the district.
Strengthened Migrant Health Centers are organizations that provide primary health care services to Syrian refugees who have settled in Turkey. These centers are staffed by specialist physicians, general practitioners, dentists, allied health personnel, psychologists and social workers. The centers are mostly staffed by Syrian healthcare professionals. Therefore, there is no language barrier/problem [52]. There are 8 of these centers in Istanbul, 1 of which is located in Sultanbeyli district. All immigrants over the age of 18 who volunteered to participate in the study were included in the study without sampling.
---
Measurement Tools
For the study, a questionnaire was prepared based on the literature; it consisted of three sections. The first section of the questionnaire consisted of statements evaluating sociodemographic characteristics and health status. The second section includes statements on rational drug use prepared according to guidelines and other sources in the literature. The third section includes the Arabic form of the E-Health Literacy Scale. The survey was conducted face-to-face with immigrants through Arabic-speaking interpreters.
---
Rational Drug and Antibiotic Use Survey
The rational drug use questionnaire was prepared based on the World Health Organization's (WHO) public awareness survey on antibiotic resistance conducted in 6 different WHO regions in 2015, the rational drug use scale whose validity and reliability studies have been conducted in Turkey, and other sources in the literature. This section consists of statements aiming to obtain information about the rational drug use status and attitudes of immigrants [7,30,53]. The statements in the section are in a 5-point Likert type and consist of a total of 13 items. Each item has a response scale ranging from "Strongly Disagree" to "Strongly Agree". The section also includes negative statements. The relevant statements were compiled in order to learn the level of knowledge of the participants about the use of medicines and antibiotics and to evaluate their attitudes. The items in the section provide a subjective assessment of the rational use of medicines and antibiotics by immigrants [7,30].
---
E-Health Literacy Scale (eHEALS)
The eHEALS was developed by Norman and Skinner in 2006 and aims to measure literacy skills useful in assessing the effects of strategies for delivering online information and applications [1,18]. The eHEALS consists of 8 items, and participants are asked to rate each item on a 5-point Likert scale (strongly disagree, disagree, undecided, agree, or strongly agree). Total scores range from 8 to 40, with higher scores indicating higher self-perceived eHealth literacy [1,54]. eHEALS scores are divided into thresholds of inadequate (8-20 points), problematic (21-26 points) and adequate (27-40 points). The eHEALS Arabic validity and reliability study was conducted by Wangdahl et al. [19]. However, since the use of 3 thresholds in the Arabic eHEALS threatens the validity and reliability of the scale, the scale was divided into two levels: limited (insufficient + problematic = 8-26 points) and sufficient (27-40 points). In our study, the two-level version of the scale was used to identify those with eHealth literacy problems [1,19]. Psychometric tests show that the eHEALS is a valid and reliable instrument, and it has also been translated, adapted and validated in Arabic [2,54].
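The scoring and categorization rules above can be expressed as a short sketch (illustrative only; the function names are ours, not part of the published instrument):

```python
def eheals_total(responses):
    """Sum 8 Likert items scored 1-5; the total ranges from 8 to 40."""
    assert len(responses) == 8 and all(1 <= r <= 5 for r in responses)
    return sum(responses)

def eheals_category(total, two_level=True):
    """Classify a total score using the thresholds described above.

    Three-level: inadequate (8-20), problematic (21-26), adequate (27-40).
    Two-level (used in this study): limited (8-26), sufficient (27-40).
    """
    if two_level:
        return "limited" if total <= 26 else "sufficient"
    if total <= 20:
        return "inadequate"
    if total <= 26:
        return "problematic"
    return "adequate"

# The study's reported median score of 21 falls in the "problematic"
# band, i.e. "limited" under the two-level split.
print(eheals_category(21))                    # → limited
print(eheals_category(21, two_level=False))   # → problematic
```
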
---
Statistical Analysis
For statistical analysis, the eHEALS was accepted as the dependent variable. Statistical Package for the Social Sciences (SPSS) version 26.0 was used for statistical analysis. Continuous variables were expressed as mean ± standard deviation (SD) and median. Categorical variables were expressed as numbers and percentages (%). Kolmogorov-Smirnov and Shapiro-Wilk tests were performed for normality analysis of the data; for variables with p < 0.05, skewness and kurtosis values were examined. Variables with skewness and kurtosis values between ±1.5 were accepted as normally distributed, and those outside ±1.5 as not normally distributed.
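The ±1.5 decision rule can be sketched as follows (our own illustration, not the SPSS procedure; the formulas use the population definitions of skewness and excess kurtosis):

```python
def moments(values):
    """Return (skewness, excess kurtosis) computed from population moments."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((x - mean) ** 2 for x in values) / n
    m3 = sum((x - mean) ** 3 for x in values) / n
    m4 = sum((x - mean) ** 4 for x in values) / n
    skewness = m3 / m2 ** 1.5
    kurtosis = m4 / m2 ** 2 - 3.0  # excess kurtosis: 0 for a normal curve
    return skewness, kurtosis

def approximately_normal(values, bound=1.5):
    """Apply the study's rule: treat as normal only if both |skewness|
    and |kurtosis| are within the ±1.5 bound."""
    s, k = moments(values)
    return abs(s) <= bound and abs(k) <= bound
```

For example, a strongly right-skewed sample such as `[1, 1, 1, 1, 1, 10]` fails the rule, which in this study would lead to a nonparametric test being chosen.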
Since the data in the research group did not show normal distribution, the Mann-Whitney U test and Kruskal-Wallis test were used in data analysis. Chi-square and Fisher's exact tests were used to compare categorical variables between groups. Correlation (Spearman) analysis was used for the relationship between continuous variables. Logistic regression analysis was performed to predict the level of eHealth literacy according to the independent variables, model fits were evaluated, and the variables that contributed significantly to the model were examined. In statistical analyses, p < 0.05 was considered significant.
---
Ethics Committee Permission
Ethics committee permission was obtained from the Istanbul Medipol University Non-Interventional Clinical Research Ethics Committee on 24 November 2022 with decision number 991. The individuals included in the study were asked to participate in the study after being informed about the research and permissions. A questionnaire was administered to individuals who agreed to participate in the study.
---
Conclusions
It was observed that Syrian immigrants have very good knowledge and attitudes about antibiotic supply and use. However, their knowledge and attitudes regarding drug use, treatment compliance and herbal medicines were not sufficient. The eHealth literacy level of 80.3% of the immigrants in the research group was found to be limited (insufficient + problematic) and that of 19.7% was found to be sufficient. The eHEALS level of Syrian immigrants was found to be associated with being married, having a low income level, living in Turkey for a longer period of time, chronic disease, regular medication use and monthly out-of-pocket health expenditure. In addition, advanced age, low education level, female gender and regular medication use were associated with a low level of eHealth literacy.
Interventions targeting disadvantaged groups such as immigrants are very important in preventing infectious diseases, reducing treatment costs and monitoring chronic diseases. At this stage, health literacy interventions play a critical role. In today's digital environment, eHealth literacy interventions for immigrants will help them access reliable health information online and make the right decisions about their health. In addition, health promotion interventions such as eHealth literacy will enable immigrants to care about their health and improve their quality of life.
The success of health policies will be enhanced if countries with a high concentration of immigrants plan and implement health services by taking into account immigrants' needs, learning competencies, language problems, living conditions and sociocultural characteristics. eHealth literacy interventions for immigrants will facilitate the provision of health services and contribute to the safe access of immigrants to health services.
---
Data Availability Statement: All datasets and analyses used throughout the study are available from the corresponding author upon reasonable request.
---
Institutional Review Board Statement: Prior to initiating the research, ethical approval was secured from the Ethics Committee of Istanbul Medipol University on 24 November 2022 with decision number 991. All individuals involved in the study were comprehensively informed about the research aims and procedures and were subsequently invited to participate. Our research was conducted in full accordance with the Declaration of Helsinki, and informed consent was obtained from every participant.
Informed Consent Statement: Informed consent was obtained from all participants involved in the study.
---
Conflicts of Interest:
The authors declare no conflict of interest. |
It is no novelty to couple together risk and responsibility as scientific themes for joint reflection. What we have attempted to do in this Special Issue is primarily to investigate these two issues as categories, that is, as philosophical concepts that require clear-cut definitions as a starting point for examining their intertwinement and any ultimate shifts in their meanings under new circumstances such as the emergence of 'technological risks' or 'global challenges'. This intent has motivated a shift in the main role among disciplines: here political philosophy, ethics and philosophy of science are given the leading role in debating risk, whereas elsewhere this is given to decision theory, sociology (of risk) and political science, the latter represented in this issue by just one paper, that of Turnheim and Tezcan. This shift is, however, not just brought about by the disciplinary affiliation of this guest editor nor by a chance mix of authors. Its rationale lies rather in re ipsa, in the growing request for philosophical elaboration on themes that used to be confined to social or even hard sciences. There are two reasons for this: first, the amount of possible harm contained in technological development as a whole (global warming) or in its most lethal chapter (nuclear weapons) raises ultimate problems of life and death, well-being and extreme misery for the whole of humankind that can typically only be grasped by philosophy (ethics, metaphysics) or theology. Second comes a need currently emerging in public discourse about global risks or threats: as long as the reasoning about them took place in epistemic communities (for example of climatologists or public health specialists) or ecological advocacy groups, attitudes of scepticism or confusion rarely arose, and nearly every partner to the conversation was
convinced of the need to raise full awareness of the risks and to take action. This is no longer the case since the entry of an issue such as global warming in the public discourse at national and world-wide level, where a lack of clarity, dissenting opinions or biases generated by special interests (the coal industry in the USA being a case in point) are widespread. Also, while more cost and benefit calculations are now being made publicly about a serious emissions cut (either through a cap and trade system or a carbon tax), what the public seems to be sensitive to is not just economic expediency, but also the fate of the earth and future generations. Hence, a more and more complex public discourse about global threats implies a role for philosophy, as does the necessity to motivate views that used to be self-evident to narrow and specialized audiences.
Having said something about the intention of this Special Issue, I do not think comments or criticism of the chapters are what is required from the editor-at least in this case. I have also refrained from writing a Conclusion because I deem it more fruitful for the reader to look at the plurality of positions, vocabularies and research interests expressed by the authors rather than come to a somehow unifying conclusion. Everyone will pick out the stimuli emanating from this plurality that is most relevant to themselves.
What I will try to do in the following is rather to signal five foci in the articles that compose this issue:
(1) What links are there between risk and responsibility?
(2) What novelties are highlighted by the authors?
(3) Definitions and the history of 'risk'.
(4) Why act responsibly?
(5) How can we act thus?
(1) Not all of the authors problematize the linkage between risk and responsibility, and those who do so give different versions of it. Pellizzoni sticks to the classical notion of responsibility as imputability and sees responsibility as structurally coupled with risk taking. Pulcini looks at the emergence of global risks such as nuclear war as the factor that redefines responsibility as an attitude towards others rather than the imputability of a certain type of behaviour to an actor. In my contribution only risks that can be managed by humans are seen as capable of being a source of responsibility, which is regarded as feeling responsible for something and towards somebody; this is not the only limit I set to the scope and meaning of 'risk'.
(2) Many authors converge in underlining that the magnitude of the new risks, particularly in the environmental realm, and more precisely the new magnitude (disruption of world society or even civilization) of the eventual loss, creates new settings for reflection on responsibility. Jamieson sees the difficulties of interpreting climate change as a problem of individual moral responsibility, but concludes that particularly with an eye to this problem it is the very 'everyday understandings' of moral responsibility that should be changed.
That the new situation requires a redefinition of our moral categories is a position largely shared by Pulcini, as we have just seen.
On other terrains, other authors point out the effects of these new elements. Ferretti argues that in the case of risks of a possibly catastrophic dimension and affecting different generations, the compensation model based on tort law can no longer apply. Pellizzoni points at the epistemological novelty of risks which, due to their very radical nature, cannot be assessed by the usual scientific procedure of trial and error.
(3) Most authors adhere to the classical definition of risk as what combines the possibility of harm or loss with the probability that it will actually happen. Many authors also cite the distinction between risk and uncertainty, but only in my contribution is this distinction taken as narrowly as to exclude extreme events such as nuclear war or catastrophic climate change from the category of risk. Pellizzoni, on the contrary, regards uncertainty as a special case of risk.
With regard to climate change, Jamieson distinguishes between the risks represented by a large, but still linear change and an even larger, non-linear change.
As uncertainties in forecasting future phenomena are sometimes taken as grounds for not taking action to contain them, it is particularly important that Dalla Chiara makes evident how much 'uncertainty' contemporary science contains as a fundamental and, so to speak, physiological category. This is done by reference to quantum mechanics along with Heisenberg's uncertainty principle and the emergence of 'fuzzy thinking' in logics.
Further controversy is met in the historical assessment of the risk category. All those who tackle this issue converge in seeing risk as feature of modernity, but Ferretti and Cerutti underline the progressive side of risk taking as a widening of choice and therefore of liberty, while Pellizzoni tends to view this stance as a manifestation of neo-liberal ideology.
(4) As for the reasons justifying the acceptance of responsibility for major risks or threats impending on humankind, both Jamieson and Cerutti concur in maintaining the insufficiency of what Jamieson calls 'prudential responsibility', based on the self-interest of the present generations. While Jamieson resorts to respect for nature as the ultimate reason for assuming responsibility for climate change, in my contribution the argument is based on an obligation towards human generations of the distant future. Pulcini's reasons are teleological rather than normative: acting out of responsibility for 'global risks', as mentioned sub (1), is the only way out of the 'pathologies of the global age'.
(5) There is a wealth of proposals as to how to implement our responsibility towards the risks and threats that impend over human life. Some authors put at the centre the redressing of the unjust distribution of risk and harm, whose geography, Jamieson warns, is shaped by the social divide (poverty, high levels of inequality, poor public services) within rather than between countries. Perhaps surprisingly, more participation is not seen as a significant factor in combating injustice: Ferretti claims the superiority of distributive justice itself, while Pellizzoni argues that the problem is the need to democratize society rather than science and knowledge. Outside the justice paradigm, Pulcini points at the importance of new sentiments capable of letting us feel the new severity of the human condition under global risks, while I argue that the survival of humankind is the primary problem in the context of which considerations of fairness make sense.
The Special Issue closes with Turnheim and Tezcan's analysis of a case in point, that is, the functioning of the UN Framework Convention on Climate Change seen as an instance of complex governance defined by the relationship with science, an inbuilt reflexivity and forms of governmentality.
Obviously the papers in this issue contain more than I can possibly summarize in this introduction, whose goal is to give a sense of the variety of positions and approaches. As for the latter, I wish to stress the attempt made here to give a joint voice to philosophical positions as different as the normative ethics relating to the theory of justice and the moral and political philosophy concerned with the fate of modernity. This multiplicity is intended to provide a variety of stimuli to those who are open to them, not to generate an unlikely synthesis. |
Background: Australia, Canada, and New Zealand are all developed nations that are home to Indigenous populations which have historically faced poorer outcomes than their non-Indigenous counterparts on a range of health, social, and economic measures. The past several decades have seen major efforts made to close gaps in health and social determinants of health for Indigenous persons. We ask whether relative progress toward these goals has been achieved. Methods: We used census data for each country to compare outcomes for the cohort aged 25-29 years at each census year 1981-2006 in the domains of education, employment, and income. Results: The percentage-point gaps between Indigenous and non-Indigenous persons holding a bachelor degree or higher qualification ranged from 6.6% (New Zealand) to 10.9% (Canada) in 1981, and grew wider over the period to range from 19.5% (New Zealand) to 25.2% (Australia) in 2006. The unemployment rate gap ranged from 5.4% (Canada) to 16.9% (Australia) in 1981, and fluctuated over the period to range from 6.6% (Canada) to 11.0% (Australia) in 2006. Median Indigenous income as a proportion of non-Indigenous median income (whereby parity = 100%) ranged from 77.2% (New Zealand) to 45.2% (Australia) in 1981, and improved slightly over the period to range from 80.9% (Canada) to 54.4% (Australia) in 2006. Conclusions: Australia, Canada, and New Zealand represent nations with some of the highest levels of human development in the world. Relative to their non-Indigenous populations, their Indigenous populations were almost as disadvantaged in 2006 as they were in 1981 in the employment and income domains, and more disadvantaged in the education domain. New approaches for closing gaps in social determinants of health are required if progress on achieving equity is to improve.
---
Background
Indigenous peoples around the world experience higher rates of poor health, poverty, poor diet, inadequate housing and other social and health problems relative to non-Indigenous people. These disparities are found in nearly all countries with Indigenous populations, including some of the wealthiest nations in the Organisation for Economic Co-operation and Development (OECD) [1,2]. The narrowing of these gaps in health and socio-economic outcomes has been a focus of successive governments in these nations since at least the 1970s.
Understanding the complex historical, political and socio-economic factors that have led to the present situation has also been a key focus for medical and social sciences across the past four decades [1,2]. High-profile reviews published by the United Nations and others in recent years have documented the common factors underlying the continuation of health and social inequalities experienced by Indigenous populations across the globe, including systematic loss of culture and language, dispossession from traditional territories, and economic and social marginalization [2][3][4].
Indigenous inequality is a global health problem, but it is perhaps most surprising to witness its continuation in some of the world's most wealthy countries. A commonly used barometer for the comparison of health and socioeconomic development across countries is the United Nations' Human Development Index (HDI). Australia, Canada and New Zealand regularly place among the top 10 countries in the world on this annual measure, which combines education, income and life expectancy [5]. A previous study showed that these countries' Indigenous populations would rank far lower on the HDI league table than their total populations, revealing the relative disadvantage of Indigenous peoples [6]. Each of these countries has since demonstrated a commitment to improving outcomes for Indigenous peoples by signing the United Nations Declaration on the Rights of Indigenous Peoples [7], which specifically articulates Indigenous peoples' rights to "improvement of their economic and social conditions".
The work of Marmot and others has demonstrated the existence of marked social gradients in health among the populations of wealthy nations [8]. In some cases the poorest groups in these societies have health and life-expectancy profiles similar to those living in developing nations. Much of this observed discrepancy in health outcomes has been attributed to so-called "social determinants of health", which we might define as those non-health indicators of life outcomes which influence an individual's health status across their life course. These can be socio-economic indicators such as education, employment status (including job type for those who are employed), income and wealth, property rights, justice system contacts, and social connections and supports, which impact a person's ability to: obtain preventive health knowledge; apply that knowledge to their own life; and access appropriate health services when treatment is required for a given condition.
Marmot's observations around health outcomes for the poor in relation to the unequal distribution of resources in wealthy societies [8] have been placed into global Indigenous perspective by the work of Gracey, King and Smith [3,4]. Where Marmot suggests that improving education, employment and income among disadvantaged segments of society will have positive implications for health and general wellbeing [8], Gracey, King and Smith [3,4] point out that the health of Indigenous populations may also be affected by additional and unique factors, such as cultural security, connection to lands, language, and culturally defined notions of health and wellbeing [3,4].
Our focus is on Australia, Canada, and New Zealand. In 2006 the combined Indigenous population of these developed nations was 2.7 million persons, out of a total population of about 55 million people [9][10][11]. These countries share a common pattern of mainly British colonization over their Indigenous populations; however, important factors have uniquely shaped Indigenous-settler relations in each. These include: geography; the relative size of Indigenous and settler populations; and, in Canada, the influence of other colonial powers [12]. Despite these differences, persistent social, economic, and health disparities between Indigenous and non-Indigenous populations exist in all three countries.
Drawing on these perspectives, our study documents the relative progress made toward reaching equitable levels of socio-economic development among Indigenous citizens in Australia, Canada and New Zealand from 1981-2006, and looks at prospects for closing gaps in social determinants of health with non-Indigenous citizens in the coming 25 years. We focus on relative inequality in the human development domains of education, employment, and income, specifically among those aged 25 to 29 years. This is the age range by which most higher education has been completed, allowing us to more clearly see changes in educational attainment patterns. It is also the age by which a number of other important transitions have generally taken place, such as leaving the parental home, the transition from school to work, and the commencement of family formation, which have life-long implications for wellbeing and intergenerational transfers of human capability. Indeed, "closing the gap" likely requires particular attention to young people, and to the quality of these transitions. We believe this is the first time one study has brought together long-term data comparing these social determinants of health in the Indigenous populations of these three nations.
---
Methods
---
Study design
This study reports results from an analysis of census data for Australia, Canada, and New Zealand. Census data were used in preference to other data sources because of: the long time series available; consistency in measurement of questions and concepts over time; the availability of data for the same time points for each country; the absence of sample size issues; and the coverage of both Indigenous and non-Indigenous populations. Any effects on Indigenous wellbeing of the recent global slowdown in economic activity are not represented, as 2006 is the most recent census year for which these data are available for comparison between all three countries.
We measured progress of Indigenous persons aged 25-29 years relative to non-Indigenous persons aged 25-29 years over a 25 year period and across three human development domains: education; employment; and income. Information to support this investigation was obtained from the national statistics agencies of Australia, Canada and New Zealand for the census years 1981, 1986, 1991, 1996, 2001, and 2006, covering each domain of interest [13][14][15].
---
Indigenous populations
Australia, Canada and New Zealand have all included questions in their population censuses to identify their Indigenous populations in each of the years 1981-2006. This has allowed the data for each of the domains examined in the analyses to be disaggregated by Indigenous status for the three countries.
The term "Indigenous persons" is used interchangeably to refer to Australian Aboriginal and Torres Strait Islander peoples, Canadian Aboriginal peoples (including First Nations, Inuit and Métis), and New Zealand Māori.
---
Data access and permissions
Census data for Australia, Canada, and New Zealand were available to the authors via custom tabulations from their respective national statistical agencies. No special permissions or ethics committee approvals were required for this study as all research was undertaken using publicly available de-identified and confidentialised data, ensuring the anonymity of all persons represented by the data.
---
Measures
---
Education domain
Our measure was the proportion of Indigenous and non-Indigenous persons aged 25-29 years who had achieved a highest qualification of 'bachelor degree or above' in each of the census years 1981-2006 for each country.
'Bachelor degree or above' includes bachelor degrees, plus all postgraduate degrees, graduate diplomas and graduate certificates that require a completed bachelor degree as a pre-requisite for enrollment. While there are some differences in the way overall education statistics have been classified on the census forms of the three countries, there is very good comparability across all three countries for the classification 'bachelor degree or above' used by our study.
---
Australia
The Australian Bureau of Statistics (ABS) provided us with a set of customized data tables from the Census of Population and Housing showing 'highest level of qualification' by Indigenous status for persons aged 25-29 years, calculated for all census years 1981 to 2006. We report data from these tables on persons with a classification of 'bachelor degree or above'.
'Highest level of qualification' is derived from responses to census questions on the highest year of school completed and level of highest non-school qualification. The data excluded overseas visitors for all years [13].
---
Canada
Statistics Canada provided us with a set of customized data tables from the Census of Population showing 'highest level of schooling' by Aboriginal designation, for persons aged 25-29 years, calculated for all census years 1981-2001, and 'highest degree, certificate or diploma' for 2006 [14].
The data refer to the highest grade or year of elementary or secondary school attended, or the highest year of university or other non-university education completed. University education is considered to be above other non-university education. Also, the attainment of a degree, certificate or diploma is considered to be at a higher level than years completed or attended without an educational qualification. From these data we were able to calculate the proportion of Aboriginal and non-Aboriginal persons aged 25-29 years who had achieved a highest qualification of 'bachelor degree or above' in each of the census years.
---
New Zealand
Statistics New Zealand provided us with a set of customized data tables from the Census of Population and Dwellings showing 'highest qualification' by Māori ethnic group for persons aged 25-29 years, calculated for all census years 1981 to 2006 [15].
'Highest qualification' is derived for people aged 15 years and over, and combines responses to census questions on the highest secondary school qualification and post-school qualification, to derive a single highest qualification. The output categories prioritize post-school qualifications over any qualification received at school. From these data we were able to calculate the proportion of Māori and non-Māori persons aged 25-29 years who had achieved a highest qualification of 'bachelor degree or above' in each of the census years.
---
Labour force domain
Our measure was a census-derived unemployment rate for each country. The census labour force variables were consistent for all three countries, with classifications of 'employed', 'unemployed' and 'not in the labour force' provided via custom tables from the statistical agencies of each country [13][14][15].
A person is said to be 'unemployed' if they had no job in the past week but were actively looking for work. A person is regarded as being 'in the labour force' if they are currently employed or actively looking for work. Persons in neither category are regarded as being 'not in the labour force' and are not included in unemployment calculations.
Unemployment rates were produced for the Indigenous and non-Indigenous populations for each country using the following calculation:

Unemployment rate = (unemployed persons / persons in the labour force) × 100
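As a concrete illustration, the calculation can be sketched as follows (the figures here are invented for illustration only, not census values):

```python
def unemployment_rate(unemployed, employed):
    """Census-derived unemployment rate: unemployed persons as a
    percentage of the labour force (employed + unemployed).
    Persons 'not in the labour force' are excluded from both terms."""
    labour_force = employed + unemployed
    return unemployed / labour_force * 100

# Invented example: 50 unemployed and 450 employed persons give a
# labour force of 500 and an unemployment rate of 10.0%.
print(unemployment_rate(unemployed=50, employed=450))  # 10.0
```

Note that persons outside the labour force do not appear in either the numerator or the denominator, so the rate is sensitive to shifts in labour force participation as well as to job losses.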
Additionally, there has been little change in the categories of labour force status at the broad level across the six censuses for any of the countries, making this variable suitable for analysis across multiple time points.
---
Income domain
Our measure was median Indigenous personal income as a proportion of median non-Indigenous personal income in each of the census years for each country.
The information on annual personal median incomes for persons aged 25-29 years for each census year for Australia, Canada, and New Zealand was sourced from the statistical agency of each country [13][14][15].
---
Results
For the indicator 'the proportion of those with a bachelor degree or higher qualification' the gaps in all countries were wide, and in fact grew wider over the period (Figure 1). For example, in Australia for those aged 25 to 29 years the gap rose from 8 to 25 percentage points between 1981 and 2006. Australia clearly fared the worst of the three countries in terms of the increase in the gap for this indicator, but even the best performer, Canada, showed a gap of 17.6 percentage points by 2006. This is not to say that educational outcomes for Indigenous people have worsened. The data for all three countries clearly indicate absolute gains in the proportion of Indigenous people with bachelor degree or higher qualifications (Table 1). However, in relative terms Indigenous people were increasingly behind the non-Indigenous populations on this measure.
While Indigenous people had consistently higher unemployment, there was fluctuation in the unemployment rate gap over the period 1981 to 2006 for all three countries (Figure 2). By 2006, both Australia and Canada showed a narrower gap than that observed in 1981, while the gap for New Zealand had widened slightly. However, Australia maintained the widest unemployment rate gap of the three countries over the entire period, despite the gap reducing from 16.9 to 11.0 percentage points. Canada finished the period with the narrowest gap (6.6 percentage points). Median Indigenous income as a proportion of non-Indigenous median income (whereby parity = 100%) ranged from 77.2% (New Zealand) to 45.2% (Australia) in 1981, and improved slightly over the period to range from 80.9% (Canada) to 54.4% (Australia) in 2006. Overall, the gap remained steady for Australia, while for Canada and New Zealand there was some fluctuation over the period (Figure 3). Again, Australia fared the worst, with Indigenous median annual income barely reaching above half that of non-Indigenous people across the reference period, while Canada and New Zealand had made some improvements by 2006.
---
Discussion
Wealthy developed nations with a colonial past, such as Australia, Canada, and New Zealand, have typically under-resourced the human development of their Indigenous populations for much of their post-colonial histories. Impact has been felt across most aspects of Indigenous life, including health, education, participation in the economy, legal rights to traditional lands and resources, cultural security, and wider issues of social inclusion. Though government-mandated reparations have been in place since at least the 1970s, long-standing inequality has left the Indigenous peoples of these countries behind their non-Indigenous counterparts on indicators of health, wealth, social justice, and general wellbeing [2]. This research comparing social determinants of health for Australia, Canada, and New Zealand suggests that such inequalities have persisted, in some cases barely improving across 25 years, with Australia the worst performer overall, despite concerted efforts by governments to close gaps in outcomes for Indigenous people in recent decades. (In interpreting the Australian unemployment figures it should be noted that many Indigenous Australians have participated in the Community Development Employment Projects (CDEP) scheme, and doing so has meant being recorded as "employed" in official labour force statistics, reducing recorded Indigenous unemployment and potentially distorting the true gap [16].)
These countries are now challenged with finding new approaches to solving this social inequality issue, if health and socio-economic conditions for Indigenous people are to even approach parity with non-Indigenous persons within a generation. The social determinants of health observed in this study covered educational attainment, labour force activity and income. We specifically examined the gaps between Indigenous and non-Indigenous people using the proportion with a bachelor's degree or higher, unemployment rates, and median annual income. There are other indicators of "wellbeing" upon which these populations could be compared. However, people's connection to the labour force, higher formal educational attainment and income are critical aspects of participation and inclusion in these societies, and key social determinants of health. In the terms of Nobel laureate and HDI author Amartya Sen, being engaged in work and having sufficient income represent "functionings" that help one to make meaningful life choices in order to realize "capabilities" [17]. In the context of advanced economies, these capabilities have direct implications for wellbeing.
While a persistent gap exists between Indigenous and non-Indigenous outcomes for these indicators, we hypothesized that this gap should have narrowed over time. Our results show that in absolute terms there was some improvement on all three indicators for all three countries, but no consistent narrowing of the relative gaps for any country (Figures 1, 2 and 3). As Table 1 shows, reductions in the indicator gaps for some time periods are due to fluctuation in the measures for non-Indigenous people, as opposed to improvements for Indigenous people.
The increasing gap in educational attainment is largely due to rapid increases in the proportion of university qualified young people in the non-Indigenous populations of all three countries (Table 1). This expansion in higher education is closely linked to compositional shifts in developed economies away from manufacturing and into knowledge-based service industries, and each of these countries has experienced periods of macro-economic restructuring towards a more knowledge based economy [18]. As relatively fewer Indigenous people complete university education, they are largely excluded from this sector of the economy. With education becoming an increasingly critical component to accessing the employment and income benefits of advanced modern economies, the effects of these compositional changes may have offset any gains from social policy investments in closing socio-economic gaps.
Reducing these gaps means addressing a complex set of issues. Increasing educational attainment requires appropriately resourced education support beginning in early childhood, sustained throughout regular schooling and into vocational and higher education settings. These programs should support Indigenous peoples' aspirations, including the maintenance of cultural integrity [19]. Factors beyond the school gate that support Indigenous engagement with the education process are also critical. We know that pathways to disadvantage in education begin in the early years, with high proportions of Indigenous children already behind their non-Indigenous peers in academic performance from their first year in school-a deficit that continues throughout primary and high school [20]. Higher rates of school absenteeism and lower levels of parental education may contribute to the widening disparity in academic performance over time for Indigenous children, and the resources and role models for scholastic learning that often exist in non-Indigenous homes may be largely absent in many Indigenous households [21].
Given these trends, closing the higher education gap between Indigenous and non-Indigenous young people will require a major change in policy approach, and patience. It must be recognized that changes made today to improve young people's readiness for school will take years to result in higher rates of university completion. This suggests that flow-on effects of higher employment and incomes may be even further away. Any suggestion that gaps in socioeconomic outcomes can be eliminated in the near future seems unrealistic. Without significant increases in the proportion of young Indigenous people completing higher education, these gaps will remain indefinitely. In developed economies, population-wide improvements in income are mostly related to improvements in educational achievement and opportunities for employment. Our study suggests both Canada and New Zealand are starting to improve income disparity issues for their Indigenous people, though each is still some way from achieving parity. The situation for Indigenous Australians is far less encouraging.
The health and wellbeing of Indigenous populations in these countries has been a key aspect of national public policy for some time. In addition to important legal changes regarding the recognition of traditional rights, governments have engaged in various efforts to improve conditions for Indigenous peoples, including education, health and employment programmes, and policy changes. For example, most recently, the government of Australia has made "closing the gap" in human development outcomes between Indigenous and non-Indigenous people an explicit goal of national policy [22], and the Government of Canada and Assembly of First Nations' Joint Action Plan has a focus on increasing access to education and employment opportunity [23], while New Zealand has used a "closing the gaps" theme for policies aimed at social justice issues for Māori [24]. Adding to this already complex policy environment is the observation that these countries have seen some growth in Indigenous populations across the reference period, in addition to that from births, due to changing patterns of self-identification in their census [25][26][27].
---
Limitations
There are several limitations to the methodology employed in this study. It is known that across time there has been a change in the propensity of people to identify as Indigenous in all three countries [25][26][27]. This means, for example, that the composition of the Indigenous population of 1981 is likely different to that of 2006 for all age groups, which may have influenced some of the results seen in this study. Another issue is that the scope of national census questions may be too limited to explain some of the differences in outcomes between Indigenous and non-Indigenous persons. For example, there may be sound cultural reasons why an Indigenous person does not seek to participate in certain educational or employment spheres, but we cannot measure this with census data. Lastly, as census data are only gathered once every five years, we are unable to track economic and social change as closely as a longitudinal survey with annual follow-up would allow.
---
Conclusions
Australia, Canada, and New Zealand represent nations with some of the highest levels of human development in the world, yet our research shows that their Indigenous populations were almost as disadvantaged in 2006 as they were in 1981, relative to their non-Indigenous populations, on three key social determinants of health. These ongoing disparities represent a major public policy concern, and a growing focus for science and human rights organizations. Given the breadth of scientific inquiry, the public spending and good intentions of successive Australian, Canadian and New Zealand governments regarding Indigenous health and social advancement since 1981, the fact that relative progress on key social determinants of health has been practically static for Indigenous peoples is alarming. Despite absolute improvements on these indicators, continuing disparities suggest that existing approaches to addressing Indigenous inequality are not as effective as they need to be. They also suggest that achieving equity may take several more decades, especially as the young adult populations described here are the ones in which more progress was expected to have occurred across these domains. Surely Indigenous peoples in these nations would be within their rights to expect a narrowing of these gaps to occur over the coming 25 years, along with improvements in health outcomes. Science and policy are yet to provide viable solutions to this enduring social equity issue. If "closing the gap" in health and socio-economic disparity between Indigenous and non-Indigenous people remains a goal, it would seem that completely new approaches are required to achieve success; otherwise Indigenous persons in these developed nations are being consigned to a future of entrenched inequality for generations to come.
---
Competing interests
The authors declare that they have no competing interests.
---
Authors' contributions
FM and MC had the original idea for the study, developed the analytic concept, and acquired the data. DP and EM compiled the data and performed the analysis. FM, MC and DP wrote the first draft. DL, EG, and SRZ contributed to all subsequent drafts and revisions. All authors read and approved the final manuscript.
Managing natural processes at the landscape scale to promote forest health is important, especially in the case of wildfire, where the ability of a landowner to protect his or her individual parcel is constrained by conditions on neighboring ownerships. However, management at a landscape scale is also challenging because it requires cooperation on plans and actions that cross ownership boundaries. Cooperation depends on people's beliefs and norms about reciprocity and perceptions of the risks and benefits of interacting with others. Using logistic regression tests on mail survey data and qualitative analysis of interviews with landowners, we examined the relationship between perceived wildfire risk and cooperation in the management of hazardous fuel by nonindustrial private forest (NIPF) owners in fire-prone landscapes of eastern Oregon. We found that NIPF owners who perceived a risk of wildfire to their properties, and perceived that conditions on nearby public forestlands contributed to this risk, were more likely to have cooperated with public agencies in the past to reduce fire risk than owners who did not perceive a risk of wildfire to their properties. Wildfire risk perception was not associated with past cooperation among NIPF owners. The greater social barriers to private-private cooperation than to private-public cooperation, and perceptions of more hazardous conditions on public compared with private forestlands may explain this difference. Owners expressed a strong willingness to cooperate with others in future cross-boundary efforts to reduce fire risk, however. We explore barriers to cooperative forest management across ownerships, and identify models of cooperation that hold potential for future collective action to reduce wildfire risk.
---
Introduction
Boundaries: fires don't understand them. We can't draw a line and say we did our part up to this point, and now we are good…It's just a bigger picture. This forest landowner from eastern Oregon recognizes that fire occurs on a landscape scale. Although he believes people need to manage fire risk beyond their property lines, he has not cooperated with any of his neighbors to address hazardous fuel conditions locally. ''We communicated with them…but they have their own balance of what they want to do,'' he explained, referring to gulfs in values and priorities for forest conditions and management. This landowner thins thickets of trees but leaves brush for deer forage. He is concerned that one of his neighbors eliminates too much habitat in his efforts to reduce fuel, while another does nothing.
The importance of managing natural processes and biodiversity at the landscape scale to promote the health and productivity of forest ecosystems is widely recognized (e.g., Lindenmayer and Franklin 2002). Doing so, however-especially when it entails managing across ownership boundaries-remains challenging. Different land ownerships, public and private, are managed for different goals using different actions, with differing ecological effects (Landres and others 1998). In the case of fire, hazardous fuel reduction on one ownership can reduce the risk of fire on neighboring lands. Similarly, suppression activities on one ownership can cause fire to be excluded from another ownership, causing fuel buildups that can lead to uncharacteristically severe fires having dire social, economic, and ecological consequences. Where management activities have ecological, economic, or social consequences beyond ownership boundaries, and the efficacy of one landowner's actions can be limited or improved by those of nearby landowners, cooperation can be an important strategy for achieving landscape-scale management goals (Yaffee and Wondolleck 2000). Cooperation is also an alternative to regulation for the management of common pool resources such as forests; local residents who develop voluntary, self-regulating management institutions may have greater expertise and incentive for managing these resources effectively than regulatory agencies (Ostrom 1990). Yet the decision to cooperate with others hinges on a balance between altruism and self-interest, and in this case, on whether landowners are willing to accept the immediate burden of cooperating with others in exchange for the longer term, but less certain, benefit of buffering their properties against fire.
In this paper we explore the relationship between nonindustrial private forest (NIPF) owners' perceptions of fire risk, including risk associated with conditions on nearby forestlands (landscape-scale risk), and their decisions to treat hazardous fuel in cooperation with others. Our study area is the ponderosa pine (Pinus ponderosa) ecotype on the east side of Oregon's Cascade Mountains, where a history of fire suppression, grazing, and timber harvest has led to a buildup of hazardous fuel and thus, fire risk (Hessburg and others 2005). Although this area is dominated by federal lands, NIPF owners own 1/6th of the forestland in the area. Much of their land borders or is near federal land, creating a mixed-ownership landscape in which their management practices affect the connectivity of fuel, and potential movement of fire, between federal wildlands and populated areas (Ager and others 2012).
Given that fire does not observe ownership boundaries, and that fuel conditions on one ownership can affect fire risk on neighboring ownerships, we hypothesized that owners who perceive a risk of wildfire to their properties, and perceive that conditions on nearby forestlands contribute to this risk, are more likely to cooperate with others to reduce fire risk across ownership boundaries. We expected owners to be motivated by the rationale that cooperation would enable them to accomplish fuel reduction activities more efficiently together than alone. Yet we also expected that social beliefs and norms about cooperation and private property ownership would influence owners' decisions to treat fuel through cooperation with others.
We investigated the relationship between risk perception and cooperation through statistical analysis of mail survey data. We used qualitative interview data to examine how NIPF owners perceive fire risk on their own properties and on the wider landscape, and communicate and cooperate with other private and public owners to address fire risk. Interview data also allowed us to explore the influence of individual beliefs, social norms, and institutions on cooperative fuel treatments, and to identify potential models of cooperation. After presenting our results, we discuss barriers to cross-boundary cooperation in hazardous fuel reduction and ways to potentially overcome them. The ecological and socioeconomic conditions prevalent in our study area are common throughout the arid West. Thus, this case from eastern Oregon may shed light on opportunities for managing fire-prone forests using an ''all lands approach'' elsewhere in the West.
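The survey analysis relies on logistic regression of a binary outcome (cooperated or not) on risk-perception measures. As a toy sketch of that model form only (the data below are invented and are not the study's survey records), a single-predictor logistic regression can be fitted by gradient descent:

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Minimal single-predictor logistic regression fitted by batch
    gradient descent; returns (intercept, slope) on the log-odds scale."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))  # predicted P(y=1)
            g0 += (p - y) / n          # gradient w.r.t. intercept
            g1 += (p - y) * x / n      # gradient w.r.t. slope
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Hypothetical records: 1 = perceives wildfire risk / has cooperated.
risk = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
coop = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
b0, b1 = fit_logistic(risk, coop)
print(b1 > 0)  # True: risk perceivers are more likely to have cooperated
```

A positive slope on the log-odds scale corresponds to an odds ratio above 1 for cooperation among owners who perceive risk, which is the kind of association the mail survey analysis tests.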
---
Literature Review
---
Risk Perception
Risk perception, defined as the ''subjective probability of experiencing a damaging environmental extreme'' (Mileti 1994), is considered an important antecedent to mitigation and adaptation behavior according to the natural hazards literature (Paton 2003). In the case of wildfire and other natural hazards, risk perception has been identified as a key variable influencing mitigation behaviors such as taking action to reduce hazardous conditions, preparing for a hazardous event, or moving to a less hazardous area (Dessai and others 2004; Grothmann and Patt 2005; Amacher and others 2005; Niemeyer and others 2005; Jarrett and others 2009; McCaffrey 2004; Fischer 2011; Winter and Fried 2000).
People form perceptions of risk through interaction with friends, peers, professionals, and the media on the basis of norms, world views, and ideologies (Douglas and Wildavsky 1982; Berger and Luckmann 1967; Tierney 1999). The process of coming to agreement on the causes and consequences of risk, and acceptable levels of uncertainty and exposure, is influenced by the level of legitimacy and trust between people and institutions (Slovic 1999). Cognitive biases (e.g., discounting future events, giving disproportionate weight to vivid or rare events, and denying risk associated with uncontrollable events) also play a role in risk perception (Maddux and Rogers 1983; Slovic 1987; Sims and Baumann 1983), as can people's past experience and objective knowledge (Hertwig and others 2004).
However, risk perception alone does not always compel mitigation behavior. Other important variables include believing one is capable of acting to effectively mitigate risk, holding oneself responsible for one's welfare, and feeling sentimental attachment to a vulnerable community or place (Paton 2003). Moreover, decisions to mitigate risk occur under complex socioeconomic conditions that both shape people's vulnerability to risk (Slovic 1999), and determine their efficacy at addressing risk (Slovic 1987; Maddux and Rogers 1983; Tierney 1999).
---
Cooperation
Cooperation refers to a spectrum of behaviors that range from communicating with others about shared interests to engaging in activities that help others, including sharing resources and work (Yaffee 1998). The theory of cooperation is based on the benefits of reciprocity to participating parties when combined efforts can achieve more than individual efforts. Disciplines ranging from evolutionary biology to political science have examined cooperation as a response to adverse and unpredictable environments, and as a strategy for hedging against and coping with environmental risk (Andras and others 2003; Ostrom 1990; Cohen and others 2001; Axelrod and Hamilton 1981). Social conditions that foster cooperation among individuals include the presence of common goals and motivations, a perception of common problems (including risks), the use of similar communication styles, high levels of trust, and expectations and opportunities for frequent exchanges of information and ideas (Yaffee 1998; Bodin and others 2006; Ostrom 1990). Policy environments, land tenure arrangements, and power relations must also be conducive to cooperation (Ostrom 1990; Bergmann and Bliss 2004).
Three important antecedents to cooperation, including cross-boundary cooperation among private landowners, are shared cognition, shared identity and legitimacy (Rickenbach and Reed 2002; Gass and others 2009). Shared cognition refers to sharing a similar perspective or having consensus on a problem or task (Bouas and Komorita 1996; Swaab and others 2007). Shared identity means sharing membership in a community or social group (Tyler 2002; Tyler and Degoey 1995; Swaab and others 2007). Legitimacy is when people or organizations are viewed as fair and capable and are empowered by others (Tyler 2006).
Social exchange theory provides a framework for understanding when cross-boundary cooperation by NIPF owners might occur. Social exchanges are interdependent interactions among people that generate mutual benefits and obligations. One type, ''reciprocal exchanges'', consists of interactions that lack terms or assurance of reciprocation (Blau 1964). Reciprocal exchanges are an informal form of cooperation that functions on the basis of reciprocity rules (an action by one party leads to an action by another party), beliefs (that people who are helpful now will receive help in the future), and norms of behavior (that people should reciprocate based on social expectations) (Molm 1994; Cropanzano and Mitchell 2005). Reciprocal exchanges entail risk and uncertainty because they occur in the absence of a contract. When they are successful, they yield trust and commitment, which in turn lead to stronger relationships (Blau 1964). When they are unsuccessful, cooperation breaks down. In contrast, ''negotiated exchanges'' are social exchanges that have known terms and binding agreements to provide some assurance against exploitation (Coleman 1990). Negotiated exchanges do not entail as much risk or require as much trust as reciprocal exchanges (Molm and others 2000).
The risks associated with cooperation increase when ''mismatches'' occur between the nature of the relationship among the cooperators and the nature of the transaction between them (Cropanzano and Mitchell 2005). For example, when two landowners who have an interpersonal relationship (one that depends on obligations, trust and interpersonal attachment) engage in an economic exchange (an exchange of goods or services), there is a mismatch. In such cases, people who act to the economic benefit of others may feel betrayed if that economic benefit is not reciprocated, and may be reluctant to enter into another such relationship. Thus, neighboring landowners who have an interpersonal relationship and who cooperate in fire risk reduction activities-which are economic because they entail investment of one person's resources in the protection of another's property-have a mismatch, exacerbating the risks associated with cooperation. We return to these observations in our Discussion.
---
Methods
---
Definitions
Our construct of wildfire risk perception among NIPF owners includes concern about a wildfire occurring on one's land, and concern about hazardous fuel conditions on nearby private or public land contributing to the chance of wildfire on one's land, based on Mileti's (1994) definition of risk perception as subjective probability. We also included awareness of the ecological role of wildfire in ponderosa pine forests, and past experiences with wildfire on one's property as elements of our risk perception construct based on Hertwig and others (2004). For purposes of our analysis, we define cooperation as jointly planning, paying for, or conducting activities that reduce hazardous fuel. We focus on cooperation among NIPF owners, and between NIPF owners and public agencies.
---
Data Collection
In September 2008 Oregon State University and Oregon Department of Forestry funded and administered a mail survey to owners of a random sample of NIPF parcels in eastern Oregon's ponderosa pine ecosystem. The goal of the survey was to learn more about NIPF owners' wildfire management practices, constraints on fire management, and how public agencies could design better assistance programs.
The survey sample was selected by casting random points across a GIS polygon created using layers of pixels that represent historical and potential ponderosa pine forests (Grossmann and others 2008;Ohmann and Gregory 2002;Youngblood and others 2004) and an ownership layer (Fig. 1). The NIPF polygon comprised approximately 1.2 million hectares, about 50 % of all NIPF land and 15 % of all forestland east of the Cascade Range in Oregon, which is consistent with other estimates of the proportion of land in NIPF ownership in eastern Oregon (Oregon Department of Forestry 2006). The point layer was joined with a state tax lot layer obtained from the Oregon Department of Revenue to create a list of owner names, addresses and tax lot numbers.
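The point-casting step of this sampling design can be illustrated with a minimal Python sketch. This is not the authors' GIS workflow: the study-area polygon, coordinates, and function names here are hypothetical, and the sketch simply rejection-samples uniform random points inside a polygon using a standard ray-casting point-in-polygon test.

```python
import random

def point_in_polygon(x, y, polygon):
    """Ray-casting test: True if (x, y) falls inside the polygon,
    given as an ordered list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a ray extending to the right of (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def cast_random_points(polygon, n_points, seed=42):
    """Rejection-sample n_points uniform random points inside the polygon."""
    rng = random.Random(seed)
    xs = [v[0] for v in polygon]
    ys = [v[1] for v in polygon]
    points = []
    while len(points) < n_points:
        x = rng.uniform(min(xs), max(xs))
        y = rng.uniform(min(ys), max(ys))
        if point_in_polygon(x, y, polygon):
            points.append((x, y))
    return points

# Hypothetical L-shaped study area (illustrative coordinates, not real UTM).
study_area = [(0, 0), (10, 0), (10, 5), (5, 5), (5, 10), (0, 10)]
sample = cast_random_points(study_area, 100)
```

In practice, as the paper describes, each sampled point would then be spatially joined to a tax lot layer (in a GIS, or with a library such as GeoPandas) to recover owner names, addresses, and tax lot numbers.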
The survey asked about owners' past (2003-2008) and intended future (2008-2013) hazardous fuel reduction activities, including cooperation with public agencies, nonprofit organizations, private consultants or other private landowners. Survey questions also addressed owners' goals, experiences with wildland fire, concern about fire risk in general, concern about specific hazards and potential losses, and demographic characteristics. Respondents were asked to reference the parcel associated with the tax lot number on their survey. The survey was reviewed by 20 natural resource professionals, landowners, and social scientists and approved by the Oregon State University Institutional Review Board prior to implementation.
The survey was administered to 1,244 owners using the total design method (Dillman 1978): an announcement card, followed five days later by the survey; a second survey to non-respondents 2 weeks after the first; and at week four, a thank you card that also served as a final reminder to non-respondents. The survey respondents consisted mostly of retirement-age males, similar to NIPF owners in the American West (Butler and Leatherberry 2004), but more had obtained bachelor's degrees, earned above the national median household income ($50 K), and were absentee (Butler and Leatherberry 2004). Also, a high proportion had treated their parcel to reduce the risk of wildfire compared to owners in the West generally (Brett Butler, unpublished National Woodland Owner Survey data 2006). They also owned relatively large holdings compared to other owners in the West (Butler and Leatherberry 2004). These disparities reflect the sampling approach (based on forestland, not forest owners), and the social and biophysical conditions in eastern Oregon, where land use rules set large minimum tax lot sizes and an arid climate limits productivity, favoring forestry and grazing over large areas. These and other characteristics of the sample are presented in Table 1.
We conducted semi-structured key informant interviews in 2007 and 2008 with a purposive sample of 60 NIPF owners owning forestland in three watersheds in the study area that are considered high priority for hazardous fuel reduction (Oregon Department of Forestry 2006): the Sprague, Upper Deschutes, and Upper Grande Ronde (Fig. 1). We identified owners having diverse fire experiences, management intensities, and ownership characteristics with help from local natural resource agencies and organizations. Each interview included a walking tour of the owner's property and averaged two hours. Questions addressed their management approaches, experiences and concerns with fire, ecological knowledge and values about fire and forest conditions, and perceptions of opportunities and constraints for hazardous fuel reduction. Most interview informants had treated some portion of their parcel to reduce the risk of wildfire. Digital recordings of the interviews were transcribed verbatim and entered into Atlas.ti, a software program that aids qualitative data analysis. The interview sample was similar to the survey sample in terms of demographic characteristics.
---
Data Analysis
To analyze the mail survey data we used frequencies to describe respondents' perceptions of fire risk and their cooperation behaviors, and logistic regression to identify the relationship between risk perception, and cooperation on fuel reduction. We began the logistic regression analysis with a manual backward stepwise regression of the cooperation variables on the risk perception variables and a set of demographic control variables, and then built final models with the variables that were relevant to the hypothesis. Table 2 contains descriptions of the cooperation response variables and risk perception explanatory variables.
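The regression step can be illustrated with a self-contained sketch in plain Python. This is not the authors' analysis: the variable names and data are hypothetical, the fit uses gradient ascent rather than a statistical package's Newton-Raphson, and the elimination cutoff (a likelihood-ratio statistic of 2.71, roughly p > .10 on one degree of freedom) is an assumed convention. The sketch shows the two moving parts: fitting a logistic regression, then dropping the weakest explanatory variable, in the spirit of manual backward stepwise selection.

```python
import math
import random

def fit_logit(X, y, lr=0.5, iters=3000):
    """Fit logistic regression by gradient ascent on the log-likelihood.
    Returns [intercept, b1, ..., bk]."""
    n, k = len(X), len(X[0])
    beta = [0.0] * (k + 1)
    for _ in range(iters):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = beta[0] + sum(b * v for b, v in zip(beta[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            grad[0] += yi - p
            for j, v in enumerate(xi):
                grad[j + 1] += (yi - p) * v
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

def log_lik(X, y, beta):
    """Bernoulli log-likelihood of the fitted model."""
    ll = 0.0
    for xi, yi in zip(X, y):
        z = beta[0] + sum(b * v for b, v in zip(beta[1:], xi))
        p = 1.0 / (1.0 + math.exp(-z))
        ll += yi * math.log(p + 1e-12) + (1 - yi) * math.log(1 - p + 1e-12)
    return ll

def backward_step(X, y, names, cutoff=2.71):
    """Drop the variable whose removal least reduces fit, if its
    likelihood-ratio statistic falls below the cutoff."""
    full_ll = log_lik(X, y, fit_logit(X, y))
    best = None
    for j in range(len(names)):
        Xr = [[v for i, v in enumerate(row) if i != j] for row in X]
        stat = 2 * (full_ll - log_lik(Xr, y, fit_logit(Xr, y)))
        if best is None or stat < best[1]:
            best = (j, stat)
    if best[1] < cutoff:
        j = best[0]
        return ([n for i, n in enumerate(names) if i != j],
                [[v for i, v in enumerate(row) if i != j] for row in X])
    return names, X

# Hypothetical data: did the owner cooperate (y), given concern about
# fire risk (binary) and a pure-noise covariate?
rng = random.Random(1)
X, y = [], []
for i in range(200):
    concern = i % 2
    X.append([concern, rng.random()])
    y.append(1 if rng.random() < (0.7 if concern else 0.3) else 0)

beta = fit_logit(X, y)
odds_ratio = math.exp(beta[1])  # OR > 1: concern raises the odds of cooperating
names, X_final = backward_step(X, y, ["concern", "noise"])
```

The odds ratios reported in Table 6 would, in any such fit, simply be exp(coefficient) for each retained variable.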
To analyze the interview transcripts we followed a standard protocol of qualitative analysis (Patton 2002). We identified and coded quotations in the transcripts that provided evidence for how interview informants perceive fire risk, including the probability of fire, the hazardous conditions that contributed to the probability of fire, and what values they were concerned about losing in the case of fire. We also coded quotations that provided evidence for how owners view the barriers and opportunities of cooperation. We linked these quotations with additional codes and wrote memos about how wildfire risk perceptions motivated owners to cooperate with others.
---
Results
---
Risk Perception and Hazardous Fuel Management
We are always concerned about fire. Our fear every summer is where is the lightning strike going to be and are we going to be able to survive the fire? That is one of the reasons we created fire breaks throughout the property, and because our neighbors didn't have any.
Comments like this one indicate that some landowners interviewed were aware of fire risk beyond their property boundaries, and responded by treating fuel. Survey responses corroborated this finding: 67 % of the survey respondents said they were concerned about a fire affecting their property. A majority (53 %) were concerned about conditions on nearby public lands contributing to the risk of wildfire on their property. Interview informants articulated similar concerns, although few were aware of which land management agency controlled nearby public lands. ''You want to see risk? There's risk,'' responded one interviewee when asked for an example of hazardous forest conditions. Like many owners we interviewed, he pointed to land on the other side of his fence line, in this case national forest land in the Sprague River Watershed. ''Here you can see where it is thinned and then it gets really thick; that is a piece of government ground. That is the difference between my place and the government ground; theirs is jungle.'' Figure 2 shows forest conditions we often encountered across property lines owners shared with federal land management agencies. Some owners were also concerned about fuel conditions on neighboring private lands, as evidenced in this comment by another interviewee from the Sprague River Watershed: ''That is an inferno waiting to happen…He's endangering my property, my structures, and also my forest''. However, owners were less concerned about conditions on nearby private lands than on nearby public lands. Only 37 % of survey respondents were concerned about fire risk from nearby private lands. Some interview informants believed that most private owners managed their forests enough (i.e., thinned and harvested) that little fuel was left to be of consequence. ''They are logging the living daylights out of that,'' exclaimed one interviewee, referring to the surrounding industrial ownership.
''It's going to be fine for a lot of years.'' Other interviewees were simply more forgiving about the risk associated with private lands than with public lands. One owner guessed that her neighbors ''are doing fine…doing it about the same way we are: thinning, logging it every few years…The cattle are keeping the brush down.'' Seventy percent of the survey respondents had treated portions of their parcels to reduce the risk of fire between 2003 and 2008. They used a range of forest management practices that can reduce fuel, presented in Table 3. The median treatment area was 20 acres (interquartile range = 1-120 acres). Many interviewees said that they treated their properties to compensate for the lack of hazardous fuel management by their neighbors. As one owner in the Sprague River Watershed explained, ''If we have a higher risk because of heavy fuel buildup on adjacent land…we look at our management philosophy a little bit differently. We would do more in our cutting, more than we like…to keep a crown fire from spreading.'' Indeed, in a different analysis of the survey findings we found that owners' concern about fire risk, and concern about conditions on nearby public land contributing to this risk, explained their likelihood of treating fuel (Fischer 2011).
---
Risk Perception and Cooperation
Most owners worked either on their own or with family members, or with private contractors to conduct forest management activities. However, many had also worked in cooperation with others. Between 2003 and 2008, 34 % of the survey respondents cooperated with public agencies, 18 % cooperated with other private owners, and 15 % cooperated with nonprofit organizations to plan, pay for, and/or conduct practices that can reduce fuel (Table 4).
Interview informants provided examples of cooperative fuel treatment, particularly with public land neighbors: participating in fire management planning with the Forest Service and the Bureau of Land Management for lands adjacent to their properties; communicating with agencies about the need to reduce fuel along shared property boundaries; coordinating forest thinning and brush-clearing with treatments on adjacent public lands to widen fuel breaks; and synchronizing prescribed burns with those on adjacent public lands to take advantage of agency fire fighters and equipment.
Interview informants cited fewer examples of cooperation with private landowners. These included allowing neighbors to graze livestock on their properties to reduce grass and brush, and planning treatments along shared property boundaries to create wider, shared fuel breaks. More often they observed the use of new techniques or equipment on each other's parcels. A number of owners said they had referred interested neighbors to their consulting foresters or operators to request treatments similar to the ones performed on their properties. Thus, some portion of the 41 % of survey respondents who had worked with private contractors may have been influenced by, or influenced, other private owners, an indirect form of cooperation.
Owners expressed a greater willingness to cooperate with other landowners in the future to reduce fire risk than they had in the past. Most survey respondents said they would cooperate with both public owners (68 %) and private owners (75 %) to reduce fuel in the future, especially if it would release them from liability for fires resulting from escaped controlled burns, reduce their share of the cost of treatments, or make more public funding available to them for treatments (Table 5).
According to the logistic regression tests, perceived risk explained cooperation between NIPF owners and public agencies, but not cooperation between NIPF owners and other private owners. Concern about a fire occurring on one's parcel, and concern about conditions on nearby public land contributing to this risk, were both associated (P ≤ .08) with whether owners reported having cooperated with public agencies in the past on forest management actions that can reduce fuel. Whether owners were aware of the historical role of fire in ponderosa pine ecosystems, and whether owners had experienced a fire on their land, were also associated (P ≤ .05) with whether owners reported cooperating with public agencies in the past to reduce fire risk. Owners' willingness to cooperate with public agencies in the future to reduce fire risk was also explained by the risk perception variables; specifically, whether owners were concerned about a fire occurring on their parcel (P ≤ .05), were concerned about conditions on nearby public lands and private lands (both at P ≤ .05), and were aware of the local fire ecology (P ≤ .05). None of the risk perception variables were associated with whether owners had cooperated with other private owners in the past. Only awareness of the local fire ecology was associated with their willingness to cooperate with other private owners in the future (P ≤ .01). P values and odds ratios for the risk perception variables are presented in Table 6. In addition, two demographic control variables were significant in preliminary manual backward stepwise regression tests: living on one's parcel and age were associated (P ≤ .05) with whether owners had cooperated in the past and were willing to cooperate in the future with both public agencies and other private owners, whereas parcel size, ownership size, tenure length, income, education and gender were not.
Our logistic regression test partially confirmed our hypothesis (owners who perceive a risk of wildfire to their properties, and perceive that conditions on nearby forestlands contribute to this risk, are more likely to cooperate with others to reduce fire risk across ownership boundaries). All of the variables in our risk perception construct were associated with cooperation with public agencies, but only awareness of the local fire ecology was associated with willingness to cooperate with other private owners.
---
Barriers to Cooperation
Although many of the owners interviewed acknowledged the potential benefits of cooperation in fuel reduction-particularly for achieving economies of scale in their efforts-they identified numerous reasons for not cooperating. Barriers related to patterns of rural social organization were most commonly cited. ''People in the timber sector are in an isolated spot,'' explained an owner of 2,500 acres in the Sprague River Watershed, referring to the sparsely populated and mountainous landscape of Oregon's east side, which impedes interaction. ''[They] don't have many neighbors [to cooperate with].'' Furthermore, the markets and other natural resource-based economic activities that once provided a basis for interaction and reciprocity despite this topography are now in decline. An owner of 10 acres who recently moved to Union County in the Upper Grande Ronde Watershed explained:
When this place was small family ownerships primarily there was more talk between people and more helping each other out because they were all managing the land. Now people aren't really deriving a significant amount of their income off the land…So they don't tend to talk to each other or help each other out much.
As a result of demographic change, many newcomers own forestland primarily for privacy and solitude (Kendra and Hull 2005) or recreation. The isolation such owners seek counters interaction. ''We're like two separate little icebergs…we may touch…but only by necessity…it's why we live out here,'' explained an owner of 200 acres in the Deschutes River Watershed. A high rate of absentee ownership (74 % in our survey sample), often associated with recreational use, is a barrier to developing the social relationships upon which cooperation is predicated. Our regression results indicated that owners who live on their parcels were more likely to have cooperated with their neighbors in forest management than those who did not.
In addition, gulfs in values, beliefs, and motivations regarding the management of fire risk, also attributable to demographic change, were seen as barriers to cooperation. Owners who manage for commodities or habitat tended to view fire as a historically important and persistent ecological force. They believed hazardous fuel needed to be managed to prevent fire from being overly destructive, but did not seek to eliminate fire from the ecosystem. In contrast, owners who hold land primarily for residential reasons tended to view fire as a threat to their homes and scenic views, defining hazardous fuel as anything in the forest that could carry fire. Differing perceptions of fire and fuel led to conflicting approaches to forest management. For example, the owners of a 200-acre parcel in the Deschutes River Watershed selectively treated the most hazardous fuels in order to preserve wildlife and scenic beauty, differentiating themselves from their neighbors who razed all vegetation (apart from large overstory trees) within a 150-yard radius of their future home.
We understood their fire concerns, but we were also very concerned about how much they cleared out of the winter forage for the deer…We don't want to see our forests be safe for wildfire but good for nothing else.
Conflict was especially apparent around fire treatments (conducting controlled burns, burning slash piles, and allowing naturally ignited fires to burn on one's property). Some interviewees viewed fire as a tool for reducing risk associated with brushy, overstocked stands; others viewed fire as the risk itself. An owner of 10 acres in the Sprague River Watershed who managed primarily for habitat had permission to clear and burn brush on the property of his absentee neighbor. However, another neighbor with less risk tolerance stymied his efforts. ''We had good conditions for burning,'' he explained. ''There were still snow drifts! Then these neighbors noticed what I was doing, got on the phone and threatened legal action. One guy threatened to kill me because they were so scared…And if you drive back there now you will see how much fuel there is; it's scary.''
Conflicting values and goals relating to fire risk also impeded cooperation between NIPF owners and public land management agencies. An owner of 2,500 acres in the Sprague River Watershed was disappointed about a prescribed burn he had jointly conducted with the Forest Service, and attributed the problem to differing scales of risk tolerance. He believed the Forest Service was comfortable losing more trees in the burn than he was: ''They were comfortable with a hotter controlled burn…than I was used to…For them this kind of mortality is nothing. They are dealing with thousands of thousands of acres. But when you [have] a limited number of acres, mortality has a different meaning.''
Social norms about private property ownership and appropriate behavior towards neighbors were also identified by owners as constraints to cooperation, despite concerns about hazardous fuel conditions on neighbors' lands. ''I kind of try to hint to them,'' said one interview informant, when asked why he hadn't encouraged his next door neighbor to address hazardous fuel on his property. ''But that is about as far as you can go because people are set in their ways.'' The owner of 1,000 acres in the Upper Grande Ronde River Watershed was more direct: ''If you want to have good neighbors you don't mention things like that.'' Social norms about reciprocity, including free-ridership, the age-old challenge to collective action, also worked against cooperation. ''The trouble with our society,'' explained an owner in his 80s who controls hazardous fuel on his property despite being handicapped, ''is that one person can do the work…and other people will take the benefit.'' In other words, if your neighbors reduce fuel on their properties, the risk to your property will be reduced without you having to do anything.
Owners were also concerned about potential risks to their autonomy as private property owners associated with participating in formal cooperative groups. For example, an owner of 650 acres in Klamath County recounted, ''I have seen people-good friends-who aren't speaking to each other today because they are in a big old group…It's no longer: 'Hey, Joe, come on over and help me fix my irrigation and I will come help you fix yours.' It's: 'No I can't come over because you have an inch more water than I do, and I don't want to sue you about it.'-I don't want to get into no organization.''
Owners were also worried about participating in formal groups that include public agencies because of bureaucratic or regulatory burdens that might be imposed on them, and the discomfort of unequal power relationships. An owner of 200 acres in the Deschutes River Watershed, who had experienced frustration cooperating with federal agencies on fuel reduction and fish passage activities, explained: ''it doesn't feel good when you are feeling the heavy hand of government coming in saying you shall do this!'' Nevertheless, about half of survey respondents declared membership in formal, natural resource-related groups (Table 7).
Finally, some owners mentioned laws that counter cooperation. The risk of being legally liable for fires or injuries resulting from negligent conditions or activities on one's property discourages many owners from cooperating on fuel reduction work. ''The problem is the law and the way liability is written,'' explained one owner. ''Nobody wants to be responsible.''
---
Opportunities for Cooperation
We asked interviewees to describe cooperative arrangements for fuel reduction that would be amenable to them, based on their observations or experiences, and grouped their responses into three informal and three formal models that we then named.
In the informal ''over the fence'' model, interviewees described landowners observing each other's activities and doing something similar, or encouraging other landowners (often public agencies) to do more. Interviewees also suggested that owners could jointly identify an issue that affects them and address it together (e.g., creating a fuel break). In the informal ''wheel and spoke'' model, contractors and other natural resource professionals help multiple nearby landowners learn indirectly from each other's experiences, leverage financial resources, and access markets and fuel reduction services, without negotiating terms of cooperation among the landowners involved. In the ''local group'' model, interviewees described local change agents creating a forum in which landowners come together to address a common problem (e.g., the accumulation of hazardous fuel on nearby public lands). This informal process can lead to communication, cooperation, learning, and eventual leadership among members of the group. A number of interviewees claimed that informal models of cooperation are more effective than formal models because they don't impose terms or require reciprocation, which can create adversarial relationships by establishing expectations.
Other landowners interviewed believed formal models of cooperation were more efficient and productive than informal models. In the ''agency-led'' model, interviewees described local natural resource management agencies providing education, technical, or financial support to help landowners learn from each other and interact around management activities, or providing public funds so that landowners can implement fuel reduction themselves. In the ''collaborative group'' model, participants commit to a process and a product, are organized by a coordinator, and are guided by policy documents. Few owners had experience with formal ''landowner cooperatives''. However, some proposed this model whereby groups of landowners would pool harvests and develop contracts with processors, working through a common contractor to increase their leverage in marketing biomass and small-diameter logs.
---
Discussion
Cooperation is predicated on the benefits of reciprocity. People's perceptions of risk can determine how they weigh the benefits and costs of working with others. This study finds that the majority of NIPF owners in Oregon east of the Cascade Mountains are concerned about fire risk to their properties, and beyond their property boundaries at a broad scale. Those who have cooperated with others in forest management activities that can reduce hazardous fuel are in the minority, however. Concern over fire risk did not appear sufficient to warrant cooperation with other private landowners in particular. Of course, some owners may lack concern about forest conditions on other private properties; a smaller proportion of owners were concerned about hazardous fuel conditions on nearby private lands than on public lands. And, some owners felt protected by heavy management on nearby private ownerships, especially industrial holdings. Nevertheless, roughly one-third of owners were concerned about the fire risk associated with other private ownerships, and the majority were willing to cooperate with other private owners in the future to mitigate that risk. That they have not acted on their concern in the past by trying to influence fuel conditions around them through coordinated planning and treatments with neighbors highlights the importance of other forces that work against cooperation. Here we draw on the literature presented earlier in this paper to discuss possible reasons for the disjuncture between NIPF owners' ideals and behaviors regarding cooperation.
---
Shared Cognition
Shared cognition is an antecedent to cooperation because it reduces the risk of participation. When parties to a collective effort perceive consensus among group members about the nature of the problem being addressed, the goals of the effort, and their commitment to the group, they are less likely to defect (Bouas and Komorita 1996;Swaab and others 2007). Although most NIPF owners surveyed perceived fire risk, it was clear in interviews that they did not hold common perceptions of wildfire, risk, or hazardous fuel. This lack of perceived consensus around the constructs of risk and hazard may hinder joint planning and implementation of fuel reduction activities. Some owners
attributed their reluctance to cooperate to conflicting values and goals regarding forest conditions and perceptions of fire hazard and risk. However, awareness of fire as an important local ecological process was a predictor of willingness to cooperate with other private and public forest owners, suggesting that owners who share this view are more likely to cooperate. Social exchange theory suggests that without shared beliefs about the probability and nature of fire risk, hazard, and the risk-reducing benefits of cooperation, owners may face difficulty rationalizing efforts to engage in potentially burdensome social relationships (Cropanzano and Mitchell 2005). This observation echoes what scholars of cooperation in the context of natural resources have argued: without a vision of a common problem or a common future, there is little reason to work together (Ostrom 1990;Yaffee 1998). Other studies of private forest owners have reached similar conclusions about the relationship between congruency of perceptions, attitudes and values, and joint planning (Rickenbach and Reed 2002;Jacobson and others 2000;Gass and others 2009).
---
Group Membership
The constraints to cooperation that NIPF owners described in interviews were predominantly related to social organization: spatial isolation, a dearth of integrating economic activities, and social norms that inhibit communication and reciprocity among neighbors about fuel reduction. Survey findings that three-quarters of owners do not live on their properties provide additional evidence that social organization is a constraint on cooperation. Rural sociologists documented early on how topographical relief and spatial isolation influence social organization, and how resulting social relations affect the development of sociability (Field and Luloff 2002). Rural residents in eastern Oregon are spread out and isolated from each other. Interview informants perceived this isolation as an impediment to sociability, and in turn, cooperation.
Owners described the deterioration of rural, natural resource-based economies as a barrier to cooperation. Although formal cooperatives have never been pervasive among NIPF owners in the West (Kittredge 2005), agricultural cooperatives have served the practical need of connecting isolated rural residents with external markets, political processes, and each other (Hobbs 1995). With the decline in timber, cattle and other commodity markets, the basis for interaction and reciprocity among rural landowners in eastern Oregon has become scarce. Moreover, as communities of place are being incorporated into wider market economies and supplanted by social networks that are not geographically based, people may be less inclined to rely on local residents and resources (Brown 1993). Some theories suggest that less bounded contexts discourage cooperation because individuals are less likely to anticipate reciprocity due to remote relationships (Cohen and others 2001).
The demographic change associated with this shift in the rural economy may be further alienating landowners. In some areas of Oregon's east side, affluent, retired, and otherwise mobile urbanites have migrated to rural areas for their amenities, bringing new values and expectations for land that can come into conflict with those of locals (Egan and Luloff 2000). The more recent rise of property individualism (Singer 2000) and increasing focus on privacy among forest owners (Butler 2008) also run counter to cooperation. Landowners' fears of losing autonomy or control of their properties have been well-documented (Ellefson 2000;Fischer and Bliss 2009). For some, sharing information or inviting people over to discuss forest conditions and management may contradict values for privacy. Even poking one's head over a fence to comment on conditions about which one is concerned is an invasion of privacy, as evidenced in the adage ''good fences make good neighbors.'' Without membership to a common community or social group, landowners lack the structural and cultural basis for developing norms of reciprocity. Without interaction, they lack capacity to communicate and social mechanisms for developing trust among individuals. These are key conditions for cooperation (Ostrom 1990;Yaffee 1998;Tyler and Degoey 1995). Lack of group identity not only reduces interaction among landowners, it may also cause the lack of shared cognition about wildfire risk that owners said make cooperation difficult.
---
Legitimacy
Although we found that some cooperation among private forest owners and public agencies occurs, many owners we interviewed reported cumbersome bureaucratic processes, corrosive expert-lay person relationships, and a lack of trustworthy leadership in natural resource management efforts that involved public agencies, which discouraged them from cooperating. Other research has shown that NIPF owners' concerns about allowing government representatives onto their property, and agreeing to accept agency assistance lead to struggles over private property rights and undermine cooperation (Fischer and Bliss 2009). These concerns arise from owners' perceptions of the legitimacy of public agencies. If people view an institution as legitimate they develop a voluntary sense of obligation to obey decisions, follow rules, or abide by social arrangements rather than doing so out of fear of punishment or anticipation of reward (Tyler 2006). This feeling of obligation is essential for successful cooperation.
---
Risks and Benefits in Social Exchange
Survey results indicated that cooperation in fire hazard reduction does not occur frequently among private owners, yet many of the owners we interviewed said they communicated and cooperated frequently with other owners to address other land management problems. This discrepancy provides evidence that cooperation on fuel reduction depends on the benefits of social exchange outweighing the costs. In reciprocal social exchanges, the risk of betrayal is high (Cropanzano and Mitchell 2005). The potential for misunderstanding or failure to meet expectations of reciprocity may explain why owners infrequently cooperated with each other, despite a future willingness to do so. Perhaps some forms of cooperation-such as moving cattle and equipment onto each other's property, and suppressing fires that have ignited-have benefits that outweigh the risk and inconvenience of working together. In contrast, the benefits of cooperation in fuel reduction are less certain given the mismatch in the nature of the transaction. Furthermore, it may be easier for parties to agree about things like relocating cattle and suppressing wildfires (shared cognition), than about fire risk mitigation, which invokes judgments about how well people manage land and protect others from risk.
Although there are substantial risks associated with cooperation between NIPF owners and public agencies, these social exchanges are generally negotiated, with both parties agreeing to a set of rules regarding commitments and expectations. In addition, substantial incentives exist for private-public cooperation, for example, when federal agencies offer cost-share monies, administrative and technical support, and other opportunities. In contrast, few policies or programs encourage or reward cooperation among private owners. These factors may help explain why owners have cooperated more frequently with public agencies than with each other.
---
Models for Cooperative Wildfire Risk Management
The fact that so many owners expressed a willingness to cooperate with other private and public owners in the future despite limited past experience and recognized constraints, and the fact that about half already belong to organized, natural resource-related groups, suggests the potential for cooperation in landscape-scale forest management. Perceived fire risk alone may not compel owners to cooperate, but other policy and institutional incentives might. Interview informants identified a range of potential formal and informal models for cooperation. The tension between the informal and formal models lies in the need for flexible, low-pressure arrangements as well as coordination and efficiency. Some owners were willing to cooperate on an ad hoc basis; others wanted cooperation to be formally organized so that it would be efficient and ensure a benefit. Owners suggested that among neighbors, informal models may be preferable because they are less likely to make people feel rigid and defensive. Although owners described "over the fence", "wheel and spoke" and "local group" models, we found only a few examples of these models operating in the context of fuel reduction in our study.
Despite owners' beliefs about the importance of cooperation, and in light of the apparent lack of cooperation among owners, a less risky approach to cooperation among neighboring landowners may be one in which fuel reduction occurs through formal institutions (Cropanzano and Mitchell 2005). For example, the high cost of removing woody biomass and small-diameter logs, and lack of financial assistance and markets for this material are commonly identified barriers to fuel reduction (Fischer 2011). Formal institutional arrangements that enable owners to jointly apply for cost-share funds, coordinate treatments, and collectively offer biomass to the market could increase the economy of scale of management activities (Goldman and others 2007). Owners also identify liability and free ridership as drawbacks of cooperative fuel reduction. Formal institutions that coordinate management actions and pool risk can offer protection against liability and other risks associated with working with others (Amacher and others 2003).
Evidence exists for the emergence of new institutions that may offer an alternative path to addressing fire risk in Oregon and elsewhere in the western United States. Local collaborative institutions can provide an organized process for increasing the efficiency and focus of collaborative efforts without the binding terms that seem to put NIPF owners on edge. For example, Community Wildfire Protection Plans (CWPPs), established under the Healthy Forest Restoration Act, are tools for involving communities in fire risk mitigation on federal and nonfederal lands. They are funded by states but developed and implemented locally. While CWPP planning and implementation efforts don't always reach beyond wildland-urban interface (WUI) boundaries and engage rural forestland owners, they have brought together many stakeholders and built relationships among community members around the issue of fire risk (Jakes and others 2007).
In California, Fire Safe Councils (that implement CWPPs in that state) have been recognized for their ability to promote innovative fire mitigation activities and build social capital in WUI communities (Everett and Fuller 2011). In Oregon, the nonprofit group Sustainable Northwest is working with landowner associations to expand processing facilities and develop merchandising yards for small-diameter wood, and to promote woody biomass heating systems (Sustainable Northwest 2011). Collaborative institutions such as these create the opportunity for frequent and sustained interaction among landowners having diverse motivations and values, a necessary foundation for building shared cognition, norms of reciprocity, and in cases where public agencies are involved, legitimacy (Bodin and others 2006).
Other cooperative models that could involve NIPF owners include The Nature Conservancy's Fire Learning Network, and the U.S. Forest Service's Collaborative Forest Landscape Restoration Program (CFLRP). Fire Learning Networks are regional groups that bring together public agencies, tribes, and municipal governments (though not specifically private forest owners) to plan and coordinate fuel reduction and forest restoration activities across ownerships. The CFLRP provides funding to local collaborative groups to plan science-based, economically viable fuel reduction and ecological restoration activities on select national forest lands. Although focused on federal lands, these efforts may be attractive to private forest owners if they help reduce the costs of, or create returns on, treatments on other ownerships, or decrease the legal risks associated with treatments through Memorandums of Understanding and formal partnerships. Future research could explore such models and the opportunities they offer for collective action for landscape-scale ecosystem management across ownership boundaries.
---
Conclusion
In articulating his vision for America's forests, U.S. Secretary of Agriculture Tom Vilsack has emphasized an "all lands approach" to forest restoration that calls for collaboration in undertaking landscape-scale restoration activities. Cooperation across ownership boundaries in fire-prone, mixed-ownership forest landscapes is desirable yet challenging. Most of the NIPF landowners interviewed and surveyed for this study were concerned about fire risk on their lands and hazardous fuel conditions on the properties around them (and on public lands in particular), and treated fuel on their properties to reduce this risk. Although NIPF owners indicated a substantial willingness to cooperate with others on fuel reduction activities in the future, their past behavior demonstrated limited cooperation. Perceived risk of fire occurring on one's property, and from nearby public forestlands, were predictors of cooperation in fuel reduction with public land management agencies. Risk perception was not associated with cooperation among private landowners. The availability of funding and technical assistance from public agencies to help support fuel reduction on private lands, the greater social barriers to private-private cooperation than to private-public cooperation, and perceptions of more hazardous forest conditions on public lands relative to private lands may explain this difference.
Interview data suggest that social values and norms about property ownership work against cooperation, especially among NIPF owners, even when they perceive a risk of fire to their properties. Nevertheless, cooperation does occur among private owners in arenas other than fuel reduction, and it may occur indirectly through third parties, such as private contractors. Furthermore, owners say they are willing to cooperate with one another in the future. Thus, given the benefits of cooperation for landscape-scale natural resource management, new institutional models of cooperation to manage landscape-scale fire risk may hold promise.
From a policy standpoint, building a common understanding of fire risk among landowners, including fire risk on lands beyond their own property boundaries, may increase the likelihood that landowners will cooperate with others to reduce hazardous fuel. Promoting this awareness among landowners who reside on their properties may be particularly effective given the positive association between residing on one's parcel and cooperation. Nevertheless, in the absence of policies and institutions that improve the balance between the costs of cooperation and the benefits of protecting one's property from fire, cooperative landscape-scale management of natural hazards across ownership boundaries will be limited. |
Background: The health and nutritional situation of adults from three rural vulnerable Amazonian populations is investigated in relation to the Social Determinants of Health (SDH) and the epidemiologic transition. Aim: To investigate the role of the environment and the SDH on the occurrence of chronic-degenerative diseases in these groups. Subjects and Methods: Anthropometric, blood pressure, and demographic data were collected from adults in the RDS Mamirauá, AM (n=149), Flona Caxiuanã, PA (n=146), and quilombola communities, PA (n=351), populations living in a variety of socio-ecological environments in the Brazilian Amazon. Results: Adjusting for the effect of age, quilombola men are taller (F=9.85; p<0.001), and quilombola women present higher adiposity (F=20.43; p<0.001) and are more overweight/obese. Men from Mamirauá present higher adiposity (F=9.58; p<0.001). Mamirauá women are taller (F=5.55; p<0.01) and have higher values of waist circumference and the subscapular/triceps index. Quilombolas present a higher prevalence of hypertension in both sexes, with significant differences in rates of hypertension among the women (χ²=17.45; p<0.01). The quilombolas are more dependent on government programs, people from Mamirauá have more economic resources, and the group from Caxiuanã has the lowest SES. In these populations the SDH play a key role in the ontogeny of diseases, and the "diseases of modernity" occur simultaneously with the ever-present infecto-parasitic pathologies, substantially increasing social vulnerability. | Obesity, Hypertension, Social Determinants of Health, and the Epidemiologic Transition among Traditional Amazonian Populations
The Amazon is one of the last ecological frontiers of the planet. In recent decades it has been the focus of intense social, economic and environmental changes, which have led to important epidemiologic implications for the local populations (Piperata and Dufour, 2007; Piperata et al., 2011; Melo and Silva, 2015; Silva, 2004a,b, 2011).
Studies about the nutrition and health of non-indigenous traditional populations of the Brazilian Amazon, such as caboclo/ribeirinhos and quilombolas, are still limited. Only recently, due to new governmental policies, has more attention been given to social, economic, territorial and health aspects of these groups (Brasil, 2007a,b,c). However, because of the logistic difficulties and high costs involved in investigating smaller and more geographically isolated populations, research reporting on their health and nutrition situation continues to be a challenge. In this article we present data on adult health, nutritional status and blood pressure for three different rural groups representing an important part of Amazonian social diversity. These groups are considered vulnerable due to their ethnic and socio-ecological conditions (Adams, 2002; Adams et al., 2006; Brasil, 2007a; Freitas et al., 2011; Gomes et al., 2013; Lima and Pereira, 2007; Silva, 2006), and here their situation is analysed from a Social Determinants of Health (SDH) perspective (CSDH, 2005; CNDSS, 2008; Marmot, 2001; Rose, 1985).
According to Rose (1985) it is necessary to look at the "causes of causes" of disease, that is, to go beyond the disease of the individual to the reasons why people become sick.
When this is done it becomes clear that the primary determinants of disease, in any population, are social and economic rather than simply biologic. Even though genetic factors may have a strong influence on individual susceptibility, genetics alone has little explanatory power over population differences in the incidence of diseases (Rose, 1985). Marmot (2001) argues that many causes of disease are social and political, and that looking only at differences between individuals often misses the point that major differences in the incidence of maladies occur between populations. Considering the impact of social factors on disease occurrence, in March 2005 the World Health Organization created the Commission on Social Determinants of Health (CSDH), with the objective of making the world aware of the importance of social determinants in the health situation of individuals and populations, and of the need to combat inequities in health created by social disparities (CNDSS, 2008).
According to the WHO, SDH are "the conditions in which people are born, grow, work, live, and age, and the wider set of forces and systems shaping the conditions of daily life. These forces and systems include economic policies and systems, development agendas, social norms, social policies and political systems" (http://www.who.int/social_determinants/en/). Throughout this paper we will attempt to show how different environments and socioeconomic settings impact the health of traditional Amazonian populations, in order to call attention to the need for the implementation of public policies aimed specifically at these groups.
---
Research Location and Populations
Data come from research projects developed between 2008 and 2014 in the Brazilian Amazon basin, designed to provide subsidies for debates about the health of rural populations and public policies. Populations with different historic origins and socio-ecological settings are evaluated to compare how their lifestyles and body habitus are influenced by the region's Social Determinants of Health (SDH). Data collection was accomplished in areas that represent a large extent of the environmental diversity found in the Brazilian Amazon. Morán (1993) presents a detailed description and analysis of the Amazonian ecosystems, and Dufour et al. (2016), in this issue, provide a general synthesis of the Amazon basin and its main geographic and ecological features; for this reason, we describe only the specific populations and ecosystems of interest to this research.
The Mamirauá Sustainable Development Reservation (RDSM) is located in the Municipal district of Tefé, Amazonas State (Figure 1). It was the first conservation unit of sustainable use implemented in Brazil (1990) which included the idea of environmental protection and shared administration of natural resources between users and the government (Queiroz, 2005;Moura, 2007).
According to Moura (2007):
"Mamirauá Sustainable Development Reservation (RDSM) has an area of 1,124,000 hectares, located in the confluence of the Solimões and Japurá rivers, and next to Amanã Sustainable Development Reservation (RDSA), in the Medium Solimões area, Amazonas State.
It is recognized by the international conservationist organizations as the largest floodplain protection reservation of the world" (pg. 28).
According to J. M. Ayres (1954-2003), the biologist who created the proposal for the sustainable development reservation, the RDSM was created to reconcile the traditional mode of occupation of the Amazonian floodplain (várzea) with environmental conservation practices and the possibility of providing better living conditions to local populations.
A recent census counted a total of 492 houses in the RDSM (IDSM, 2013). The population is divided into small communities with sometimes 4-5, and up to 30-40 houses, usually scattered along the margins of the main rivers of the region. As in other riverine areas, the exact number of communities is difficult to specify because they split frequently and new ones are created while the old ones are abandoned for several reasons such as religious differences among residents, family fights, and environmental circumstances such as insect infestations, changes in the floodplain geomorphology, or shifts in river and lake courses (Moura, 2007).
The RDSM is located in a region characterised by extended periods of alternation between flood and dryness, and it is an extremely diverse environment in terms of biodiversity. The annual floods bring enormous amounts of sediment from the Andes, which create a rich environment responsible for the high biomass productivity of the Amazonian floodplains. The alternation of wet and dry periods defines the geomorphology of the area, the abundance and endemicity of flora and fauna, and even the patterns of human occupation (Queiroz, 2005; Moura et al., 2016). Human and animal activities are driven by the rhythm of the waters and the seasonal variations. The alternation of periods determines access to resources and transit in the Reservation. During the rainy/flooding period fish are more abundant, and the duration of transportation between different locations and towards the urban centres is reduced. In the dry period everything is more difficult, from access to clean water and food to movement between houses and the cities (Moura, 2007).
The current occupation of the Mamirauá region began in the 19th century with people migrating from the northeast of Brazil during the Rubber Boom. The migrants integrated with local native populations and became today's caboclos or ribeirinhos (Lima-Ayres, 1992; Queiroz, 2005). The term caboclo has many meanings and connotations (see Lima-Ayres, 1992; Silva, 2001; Rodrigues, 2006). In this paper we adopt the concept presented in Silva and Eckhardt (1994), where caboclos are tri-hybrid populations with European, African, and Amerindian ancestry living mainly in the rural areas of the Brazilian Amazon.
The participant samples from the RDSM include 76 men and 73 women; all adults (≥ 18 years) and residents of 78% of the homes of eight communities representative of the socio-environmental diversity of that conservation unit. Data were obtained in a study to identify health and ecosystemic indicators of the Amazonian floodplain, involving about 550 residents of 88 houses of those localities (Moura, 2008).
The Caxiuanã riverine/caboclo groups live in and around the Caxiuanã National Forest (FLONA), a protected area of 330,000 hectares covered mainly by upland (terra firme) tropical forests, located in the municipal district of Melgaço, Pará State, about 400 km from Belém, the State's Capital (Figure 1). The FLONA is composed mainly of primary tropical rain forest (85%), flooded forests (12%), secondary vegetation and non-forested areas (3%). This protected area belongs to a black water river system with relatively acidic pH in the Caxiuanã bay, and the daily tides have little influence on water level (MPEG, 1994).
In Caxiuanã, houses are dispersed throughout the FLONA in clusters varying from 2 to 10 homes, but some families live in isolation, in houses that are from 500 metres to 5 or more kilometres away from one another. A total of 148 individuals were investigated (72 men and 76 women), representing about 65% of the adult population resident in the area.
Mamirauá and Caxiuanã exemplify traditional rural populations, as they originated from and have lived in Amazonia since the middle of the 19th century. They are descendants of the encounter of Amerindians with European settlers, and of Africans brought to Brazil as slaves who sometimes escaped from urban centres and farms to distant places in the jungle (Lima-Ayres, 1992; Silva, 2001). They have lifestyles strongly dependent on subsistence activities such as the cultivation of manioc (Manihot esculenta), beans and corn, artisanal fishery for domestic consumption and sale, collection of forest products for consumption and sale in the local towns, and small animal husbandry. They also maintain regular contacts with regional markets, take temporary jobs in ecotourism activities, and provide support to scientific research (Filgueiras and Silva, 2013; Lisboa et al., 2013; Moura, 2007, 2010; Piperata, 2007; Piperata et al., 2011, 2013; Silva, 2001, 2011; Silveira et al., 2013). In the last decade, these groups have also benefited from several social programs of the federal government, such as retirement and rural pensions and the Bolsa Família (a federal welfare program); the impact of these programs on health has not yet been fully evaluated (Brasil, 2009, 2010; Ivanova and Piperata, 2010; Moura, 2007; Piperata et al., 2011, 2013). The main differences between the two protected areas are related to their ecological and political settings. The first is in a floodplain ecosystem and was intended to involve the local communities in its management, with full access to the natural resources; the latter is mainly a forest/upland ecosystem, legally a national protected area, where the families are considered intruders and their access to the local resources is formally limited.
Quilombos are groups formed predominantly by African-derived populations that originated in Brazil from slave escapees who survived in the Amazon basin and other regions, making use of common systems of land ownership and tenure (Arruti, 2008; Salles, 2005; Treccani, 2006). Although there are no specific genetic studies yet of the groups discussed in this study, in general the quilombolas of Amazonia also present, in varied percentages, biological and cultural influences of Amerindian and European groups (Guerreiro et al., 1994, 1999; Santos et al., 2008). Even though there is great variation among communities, the rural quilombolas are organised in settlements varying from five or six to two dozen or more houses close to each other, usually ordered in a linear way near rivers and other water sources. They practice mainly subsistence agriculture, fishing, extraction of natural products, production of handicrafts for sale, and small animal husbandry for survival (Brasil, 2007b; Oliveira, 2011).
In recent years the quilombolas also started to receive the Bolsa Família, which became an important source of cash to many families (Oliveira, 2011;Guimarães and Silva, 2015).
Overall 351 people (154 men and 197 women) from five quilombola communities: Africa, Laranjituba, Santo Antonio, Mangueiras, and Mola, all in the State of Pará, were included in this analysis, encompassing at least 60% of the adults (≥ 18 years of age) in the participant communities (Figure 1). The investigated quilombo residents have subsistence patterns and socioeconomic situations similar to the riverine/caboclo groups, except that they are located predominantly in areas of upland, closer to the largest regional urban centre (Belém), and some of them have better access to basic infrastructure such as proximity to highways, electricity, health centres, telephones and primary schools, although they suffer constant discrimination due to their assumed slave ancestry (Cavalcante, 2011;Pinho et al., 2013).
Quilombolas are included in this study because they encompass a large segment of the rural Amazonian populations and hence increase the diversity and range of the sample investigated, and because politically they are in the same situation of social and environmental vulnerability as the riverine/caboclo groups, being subject to most of the same SDH factors. From a biological point of view, differences among the investigated populations are possibly smaller than the similarities because of the historical origin of the participant quilombolas.
Other information about the ecologic situation, history, geography, social and economic conditions, subsistence and health aspects of the groups and investigated areas is available in previous publications (Borges, 2011; Cavalcante, 2011; Filgueiras and Silva, 2013; Guimarães and Silva, 2015; Lisboa et al., 2013; Melo and Silva, 2015; Moura, 2007, 2008; Moura et al., 2016; Pinho et al., 2013; Piperata and Dufour, 2007; Piperata et al., 2011, 2013; Silva, 2002, 2009, 2011; Silva and Padez, 2010; Silva et al., 2006; Silveira et al., 2013).
---
Methods
All the projects were approved by the institutional Committee of Ethics in Research and the communities involved. All participants signed a research consent form following the Resolutions CNS/Brazil 196/96 and 466/12 (Brasil, 2012).
In all groups investigated, the sampling strategy involved a first contact with the communities to explain the research objectives, obtain their group approval for participation, and conduct a first population survey. This was followed by one or more field trips during which individual consent was obtained and personal (including health and anthropometric), family and household information was collected, either at each home of the locality or, according to the wishes of the community and the time frame for data collection, at a central place such as a community health centre or school, where the families converged at a previously defined date and time. This research design, adapted from Silva (2001) and Moura (2007), made it possible to guarantee a high rate of participation of adults and children, men and women, representative of the overall population of each study area.
The anthropometric measures were taken following procedures described by Weiner and Lourie (1981) and SISVAN (2008). The anthropometric measurements were done by the same individuals to reduce inter-observer error.
The anthropometric variables analysed include height, weight, arm circumference, waist and hip circumferences, and triceps, subscapular and suprailiac skinfolds. Body Mass Index (BMI) was calculated from the weights and heights (WHO, 2011; Deurenberg et al., 1990; Martínez et al., 1993). Circumference measures were taken with a fabric anthropometric tape, following protocols of the World Health Organization (2000). Skinfold measures were made with a Cescorf caliper, according to Frisancho (1999). The parameters adopted are described in Table 1. The percentage of general adiposity and the amount of fat-free mass were calculated from the skinfolds according to Durnin and Womersley (1974).
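As a concrete illustration of these derivations, the sketch below computes BMI from weight and height and estimates percentage body fat from a skinfold sum via a Durnin & Womersley-style body-density equation followed by the Siri conversion. It is a minimal sketch, not the authors' code: the coefficient pair shown is illustrative only (the published method tabulates separate coefficients for each sex and age band), and the Siri conversion is one common choice that the paper does not explicitly name.

```python
import math

def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def body_density(sum_skinfolds_mm: float,
                 c: float = 1.1631, m: float = 0.0632) -> float:
    """Durnin & Womersley-style linear model on log10 of the skinfold sum.

    c and m are ILLUSTRATIVE coefficients for one sex/age band; the method
    tabulates a separate (c, m) pair for each sex and age group.
    """
    return c - m * math.log10(sum_skinfolds_mm)

def percent_fat_siri(density: float) -> float:
    """Siri equation converting body density to percentage body fat."""
    return (4.95 / density - 4.50) * 100.0
```

For example, `bmi(70, 1.75)` gives roughly 22.9 kg/m², and a 50 mm skinfold sum run through `body_density` and `percent_fat_siri` yields a body-fat percentage in the high teens under the illustrative coefficients.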
The anthropometric measures were compared between the sexes through a one-way analysis of variance (ANOVA), with differences considered statistically significant at p<0.05.
To analyse differences among populations, an analysis of covariance (ANCOVA) was performed, adjusting for the effect of age. Statistical analyses were performed using SPSS® version 17.0.
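The between-sex comparison above reduces to a one-way ANOVA F statistic: the between-group mean square divided by the within-group mean square. The paper's analyses were run in SPSS; the pure-Python sketch below (function name and structure are our own) shows how that statistic is formed for any number of groups.

```python
def f_oneway(*groups):
    """One-way ANOVA F statistic: between-group over within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    # Between-group sum of squares, df = k - 1
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares, df = n - k
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The ANCOVA used for the between-population comparisons additionally enters age as a continuous covariate in the underlying linear model, which SPSS (or, e.g., an OLS regression on group indicators plus age) handles internally.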
General health evaluation was made through a clinical exam accomplished by a physician. Blood pressure was checked in the brachial artery on the left side, using a certified aneroid sphygmomanometer following procedures recommended by the Brazilian Health Ministry (Brasil, 2006) which follows the WHO parameters.
The parameter values for blood pressure assessment according to the Brazilian Health Ministry (Brasil, 2006) are presented in Table 2 (source: Brasil, 2006; SAH = Systemic Arterial Hypertension). Systemic Arterial Hypertension (SAH) is characterised as "systolic blood pressure higher or equal to 140 mmHg and diastolic blood pressure higher or equal to 90 mmHg in individuals not making use of anti-hypertensive medication" (Brasil, 2006, p. 14). Individuals with elevated blood pressure (systolic between 120 and 139 mmHg and diastolic between 80 and 89 mmHg) tend to maintain pressure above the population average; they are potentially at higher risk of developing SAH and associated cardiovascular events, and are considered to be in a stage of "pre-hypertension" (Brasil, 2006).
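The cut-offs just described translate directly into a staging function. The sketch below is a hypothetical helper, not code from the study; following usual staging practice it assigns a person to the higher category when the systolic and diastolic readings disagree, which is an interpretive assumption on our part.

```python
def classify_blood_pressure(systolic: int, diastolic: int) -> str:
    """Stage a blood pressure reading using the Brazilian Health Ministry
    (Brasil, 2006) cut-offs cited in the text. Assumes the individual is not
    on anti-hypertensive medication; when the two readings fall in different
    categories, the higher category wins (interpretive assumption)."""
    if systolic >= 140 or diastolic >= 90:
        return "hypertension (SAH)"
    if systolic >= 120 or diastolic >= 80:
        return "pre-hypertension"
    return "normotensive"
```

For example, a reading of 130/85 mmHg stages as pre-hypertension, while 118/92 mmHg stages as SAH because the diastolic value alone crosses the 90 mmHg threshold.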
Information about the situation of environmental risks, labour activities, subsistence strategies, life and housing conditions and geographic/land/social conflicts were also obtained through participant observation and interviews as part of the SDH assessment.
---
Results
The studied communities have a diverse set of economic and subsistence activities, from agriculture for domestic consumption and sale, artisanal fishery, small animal husbandry, forest management, handicraft manufacture and ecotourism, to formal work as teachers, health agents and municipal technicians. This range of activities is similar to that of other Amazonian rural populations (Brasil, 2007b, 2008; Guerrero, 2010; Lima-Ayres, 1992; Moura, 2007, 2010; Murrieta, 1994; Murrieta and Dufour, 2004; Nugent, 1993; Piperata and Dufour, 2007). In the last decade an important cash contribution from the income distribution programs of the federal government (Bolsa Família) has been given to riverine and quilombola families. Together with rural retirement pensions and temporary work contracts, these have increased their access to consumer goods and affected their diet (Ivanova and Piperata, 2010; Moura, 2007; Pinho et al., 2013; Piperata et al., 2011, 2013; Silva, 2011).
The investigated communities live in different ecological environments and, due to the historical combination of the main groups that contributed biologically and culturally to the formation of the current Amazonian population (Amerindians, Europeans and Africans), they represent a significant portion of the regional biocultural diversity for which information about health, nutrition, and the SDH is still very limited.
All communities present precarious conditions of environmental sanitation, lacking basic sanitary infrastructure and piped water, and have difficult housing situations: most buildings are made of wood, have a small number of rooms, and many lack an internal water closet, which relates directly to the high intestinal parasite loads and other infectious and deficiency diseases found among these groups (Giatti et al., 2007; Lisboa et al., 2013; Moura, 2007, 2008; Pinho et al., 2013; Silva, 2001, 2009). Social and economic activities developed in the rural areas of Amazonia, mainly in the floodplain, are strongly marked by seasonality (flooding and the reduction of water levels), which directly influences the rhythm of life and affects access to health and education, increasing the difficulty of reaching health centres and schools during some periods of the year (Filgueiras and Silva, 2013; Moura, 2007; Silva, 2001, 2006, 2011; Silva et al., 2006).
The sociodemographic situation of these populations is presented in Table 3. Caboclos and quilombolas show similar socioeconomic conditions, particularly in relation to education, income and number of rooms in the house. However, some particularities were noticed, such as: greater access to, and dependence on, government programs among quilombolas, likely related to closer political proximity to the urban centres and better social organisation in associations; more consumer goods in Mamirauá, due to the several projects run by the Mamirauá Institute over the years and access to cheaper goods from the "Zona Franca de Manaus" (Manaus Free Trade Zone); and more residents per house in Caxiuanã, due to the FLONA legislation that limits the establishment of new families. The difference in the frequency of kitchens inside or outside the houses is associated with the habits of the riverine populations, who traditionally maintain their "girau" outside the house to facilitate food processing, mainly of fish, and the drainage of waste water. The quilombolas prefer the kitchen inside the house, especially if there is a faucet with water pumped from an open well with an electric engine, which indicates higher social status. Among quilombolas, over 80% of the houses have access to electricity, while among the riverine fewer than 20% do. The high number of latrines outside the house in all groups, usually holes dug directly in the ground, indicates the lack of access to environmental sanitation and the potential contamination of the populations and water sources by fecal material.
** Includes Bolsa Família, retirement pensions, rural pensions and other support provided by the governments in the form of cash. *** Includes items such as motorboat, boat engine, gas stove, radio, TV, parabolic antenna, stereo, DVD player, bicycle, chainsaw, clock, sofa, sewing machine, washing machine, shotgun, mattress, electricity generator.
Table 4 describes the median values of BMI, adiposity percentage, amount of fat-free mass and the subscapular/triceps index (STI) after adjusting for the effect of age. Among the quilombolas, all variables present significant differences between the two sexes, except upper arm circumference (UAC) and waist circumference. The three groups present a similar pattern in which men have significantly higher values of height, weight, fat-free mass and STI, and women have significantly higher values of skinfolds, adiposity percentage and waist and hip circumferences, except in the population of Caxiuanã, where men present a mean waist circumference significantly higher than women. Comparing the three groups (F1), quilombola men are significantly taller than men from Caxiuanã and Mamirauá. Mamirauá men have significantly higher adiposity values (skinfolds) and percentage of general adiposity, hip circumference and amount of fat-free mass. In all groups, women present statistically significant differences (F2) in several variables: Mamirauá women present height, waist and hip circumferences and STI superior to those of quilombola and Caxiuanã women. The quilombola women present general adiposity values higher than Caxiuanã and Mamirauá women.
Insert Table 4
Figures 2 and 3 present the obesity values (including overweight) according to the WHO intervals in men and women of the different age groups. As the study samples are small, because the settlements of Amazonian rural populations characteristically have a small total size, we evaluated overweight together with obesity, as both are correlated with elevated morbidity and mortality rates (Hu, 2008; SISVAN, 2008). In men, in all age groups, the Mamirauá population presents higher values than the quilombola and Caxiuanã populations. Caxiuanã men present lower values than quilombola and Mamirauá men, except in the 60-75-year-old age group (Figure 2). In women, the quilombola sample presents higher values than Caxiuanã and Mamirauá in all age groups, except at 18-29 and 50-59 years (Figure 3). In all age groups, Caxiuanã women present less overweight/obesity than quilombola and Mamirauá women.
Insert Figures 2 and 3
The frequency of blood pressure status in the studied populations is shown in Table 5.
SAH is more frequent among quilombola men and women; Caxiuanã men have more systolic and diastolic pre-hypertension than the other men, and Mamirauá women have a higher frequency of systolic pre-hypertension than all other groups, even though they present the lowest overall frequency of SAH.
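The blood pressure categories discussed above (SAH, systolic/diastolic pre-hypertension) can be sketched with the usual JNC 7-style cut-offs; the exact criteria used in the study are not restated in this section, so the thresholds below are an illustrative assumption:

```python
def bp_status(systolic, diastolic):
    """Classify blood pressure (mmHg) with JNC 7-style cut-offs
    (illustrative; the paper's exact criteria are not given here)."""
    if systolic >= 140 or diastolic >= 90:
        return "hypertension"
    if systolic >= 120 or diastolic >= 80:
        return "pre-hypertension"
    return "normal"

def isolated_systolic_hypertension(systolic, diastolic):
    """Elevated systolic pressure with a non-hypertensive diastolic value."""
    return systolic >= 140 and diastolic < 90
```

Note that the "or" in both tests means a person can be classified as pre-hypertensive or hypertensive on the systolic or the diastolic reading alone, which is why the paper reports systolic and diastolic pre-hypertension separately.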
---
Discussion
Obesity, arterial hypertension and type 2 diabetes are among the chronic-degenerative diseases with the largest demographic, economic and social impact today (Hu, 2008; SISVAN, 2008; SBEM, 2010; SBH, 2010). Several studies have shown that the prevalence of obesity, hypertension and the diseases associated with them varies among populations depending on their degree of contact with Western culture, their socio-ecological situation, the impact of the market economy and government policies on their diet and lifestyle and, perhaps, their biological ancestry (Blanes, 2008; Dressler, 1999; Liebert et al., 2013; Silva, 2001, 2011). According to the Brazilian Commission on the Social Determinants of Health, the SDH are the social, economic, cultural, racial/ethnic, psychological and behavioural factors that influence the occurrence of health problems and the risk factors in a population (CNDSS, 2008).
Nutritional and cardiovascular diseases are known to be associated with all these factors, hence, by using a SDH perspective it is possible to look for the causes of the causes of these diseases and propose more adequate public policies to deal with them.
In Brazil, most epidemiologic studies of non-transmissible chronic diseases have been concentrated in urban areas and in the South and Southeastern regions. There are only a relatively limited number of studies in the North, and those conducted among rural populations are few (Adams, 2002; Alencar et al., 1999; Borges, 2011; Borges and Silva, 2010; Giugliano et al., 1981; Melo and Silva, 2015; Pinho et al., 2013; Silva, 2004b, 2006, 2009, 2011). More studies are still needed to understand the situation and distribution patterns of the "diseases of modernity" (hypertension, type 2 diabetes, obesity, and metabolic syndrome, among others), and to identify the biological, environmental and social factors that determine the risk dynamics in these populations.
Several investigations have shown that there is a relationship between infant malnutrition and overweight/obesity and their associated diseases in adult life (Bogin, 2010; Hu, 2008; Popkin, 2003). Recent long-term studies among native Americans such as the Shuar and the Tsimane' have also shown that the impacts of socio-economic changes on health in traditional populations can be fast and dramatic, although varied in degree according to a number of factors (Rosinger et al., 2013; Liebert et al., 2013; Urlacher et al., 2016, in this volume). Research among caboclo and quilombola populations has already demonstrated high percentages of infant malnutrition and the epidemiologic transition in these groups (Brasil, 2007b; Lahr, 1994; Oliveira, 2011; Silva, 2001, 2009; Silva and Guimarães, 2015).
As has been established in other developing countries, the results presented here highlight that, more than any single biological factor, there is a direct relationship between the situation of socio-ecological vulnerability and the populations' health, in relation both to infectious-parasitic and to chronic non-transmissible diseases.
Overweight and obesity prevalence in the studied populations is high, especially in men from Mamirauá and in quilombola women, while low weight (BMI <18.5 kg/m²) is not significant in either men or women. Compared to other Brazilian populations (Brasil, 2009; Blanes, 2008; CNDSS, 2008; IBGE, 2010), men from Mamirauá present very high frequencies of overweight/obesity (51.3%), lower only than the Brazilian urban population (53.5%), while men from Caxiuanã present the lowest values (13.3%). On the other hand, quilombola women present values (53.4%) as high as the general population of women in the country (53.1%), including the urban (53.1%) and rural (53.4%) areas considered separately. Women from Caxiuanã present lower values (26.3%) compared to other Brazilian groups.
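The WHO BMI intervals used above (low weight below 18.5 kg/m², with overweight and obesity pooled as in Figures 2 and 3) can be expressed as a small classifier. This is a sketch of the standard adult cut-offs, not code from the study:

```python
def bmi_category(weight_kg, height_m):
    """Classify adult BMI using the standard WHO cut-offs."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal"
    if bmi < 30.0:
        return "overweight"
    return "obese"

def overweight_or_obese(weight_kg, height_m):
    """Pool overweight and obesity into one group, as in Figures 2 and 3."""
    return bmi_category(weight_kg, height_m) in ("overweight", "obese")
```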
From a cultural point of view, among Amazonian rural populations who are used to food insecurity and all types of infrastructure deficiencies, there is a widespread perception that fat is healthy and that being a "chubby" child or adult is an indicator of good life quality.
According to Hu (2008), the different perceptions among populations of the meaning of being "fat" (among African-Americans in the USA, or the Samoans, for instance) create in some groups a higher social tolerance of people with overweight and obesity, as these are not seen as potentially sick. In the investigated groups, overweight is considered not a risk for disease but a sign of health and an indication that the family has financial resources and high social status, demonstrating the need to understand the socio-cultural dynamics when investigating the epidemiologic situation of these populations.
Besides diet, physical activity is one of the decisive factors in weight maintenance and gain (Hu, 2008). Several changes may have an important role in the differences observed in the frequency of obesity/overweight and hypertension among the investigated groups: the differential patterns of physical activity and eating habits between men and women traditionally present in rural populations, the current reduction in women's physical activity as a function of a smaller number of children, and the acquisition of industrialised and frozen foods that require little preparation and of consumer goods such as gas stoves, televisions, DVD players and washing machines. As gender roles, work, and daily activities are important SDH, more detailed research at the household level is necessary to elucidate the impact of the new consumption patterns on the health of Amazonian rural populations, particularly the women.
In relation to blood pressure, the prevalence of SAH is higher in the quilombola population than in Caxiuanã and Mamirauá (Table 5). The overall prevalences of pre-hypertension and SAH are correlated with the overweight/obesity patterns and are of particular concern among women, who are an especially vulnerable segment of the rural populations (Brasil, 2009; Borges and Silva, 2010; Borges, 2011; Paixão and Carvalho, 2008; Silva, 2001). Although the prevalence of SAH is not above that observed in other Brazilian rural and urban populations (Brasil, 2006; Silva et al., 2006), the values show that hypertension is already a public health problem among these Amazonian rural populations.
There are still few investigations of SAH prevalence among the non-indigenous inhabitants of the rural areas of Northern Brazil, and direct comparison with other studies is difficult, as other works usually consider populations with older age cut-offs (at least >19 years old), while this study included individuals from 18 years old. In a general analysis, although the overall prevalence observed here is not above what has been reported elsewhere, when the high prevalence of pre-hypertension and of isolated systolic or diastolic hypertension in the three groups is also taken into consideration, added to their overweight/obesity situation and the socio-ecological precariousness in which they live, a complex picture arises which combines the epidemiologic and nutritional transitions. This reflects the importance of the social determinants in the health of rural populations and requires immediate action to avoid an SAH epidemic and its accompanying chronic manifestations.
Generally, groups exposed to greater influence of Western culture and those more involved with the market economy present higher obesity and SAH levels, and a stronger association between blood pressure and chronological age (Dressler, 1999; Hu, 2008; Rosinger et al., 2013; Silva et al., 1995). Although these effects have been observed independently of the place the populations inhabit, the association patterns and environmental factors that contribute to the elevation of blood pressure, and of obesity levels, with age and ancestry have been shown to be highly variable as a consequence of the economic, ecologic, historic-cultural and biological factors of each population (Dressler, 1999; Wirsing, 1985), characterising a strong relation between the socio-ecological situation (the Social and Environmental Determinants; Blanes, 2008; SBEM, 2010; SCDH, 2005) and the health/illness of the investigated populations.
Among the riverine/caboclo and the quilombola, difficulties related to access to potable water, environmental sanitation, and health services, although they have improved in some areas in recent years, especially Mamirauá (Moura, 2007, 2008; Moura et al., 2016), are still a matter of concern, as they are involved in the origin of many conditions, such as diarrhoeas, anaemia, and infant malnutrition and death, which are among the main morbidity factors related to the SDH in the Amazon region (Brasil, 2008, 2009; CNDSS, 2008; Lahr, 1994; Moura, 2008; Piperata et al., 2013; Silva, 2009).
Although Brazil has gone through several economic and social upheavals in the last 50 years, an accelerated process of nutritional transition is underway, which has increased the prevalence of overweight/obesity (and also of SAH), mainly among women, while the prevalence of malnutrition, mainly infantile, remains high although falling (Brasil, 2009, 2010; CNDSS, 2008; Ivanova and Piperata, 2010; Piperata et al., 2011). This puts rural Amazonian populations, such as the riverine and quilombolas, in the vulnerable situation of bearing a double burden of disease, characterising the epidemiologic transition taking place in the country, and particularly in Amazonia (CNDSS, 2008; Monteiro et al., 2010; Oliveira, 2010; Silva, 2006). In the States of the North, circulatory system diseases are currently among the main causes of death in adults, while neonatal and infant mortality continues to be among the highest in the country (Brasil, 2010; CNDSS, 2008). On the other hand, the extent of underreporting of disease and death, and of registration under ill-defined causes, makes the existing statistics unreliable, possibly underestimating the real health situation of the region (Lisboa et al., 2013; Silva, 2006).
The population of Amazonia had the lowest Gini index in the country in 2013 (0.478) (IBGE, 2015), and the second smallest per capita income of the nation in that year (IBGE, 2015). The groups investigated here reflect that situation. As in other areas of Brazil, poverty, the precariousness of environmental sanitation and of other basic infrastructure, illiteracy, unemployment, and racism/discrimination affect mainly the self-declared "pardo and negro" (brown and black) and the poorer rural segments of the population (CNDSS, 2008; Paixão and Carvalho, 2008; Pinho et al., 2013), among whom the quilombolas and the riverine/caboclo can be included, further characterising their socio-ecological vulnerability.
Studies indicate that in Northern Brazil obesity mostly affects the poorest and least educated segments of the population, mainly women; in some rural populations SAH prevalence is higher among them as well, and there is higher mortality among black and brown women due to circulatory diseases (CNDSS, 2008; Oliveira, 2011; Silva et al., 2006). However, as there are limited data on SAH or obesity prevalence in rural populations, their true impact on the several vulnerable groups of Amazonia is still unknown. The investigated populations fit all the social and environmental vulnerability descriptors, making it clear that they are especially vulnerable to the SDH and that specific public policies ought to be implemented urgently to improve their quality of life and health.
---
Conclusions
There are still few studies about the human biology of riverine/caboclo and quilombola populations. This group of investigations is pioneering in the simultaneous interdisciplinary study of the morbidity situation for chronic diseases and of the SDH of these populations taken together.
It was identified that, overall, the precarious socio-ecological situation in which the studied populations live exposes them to a double burden of disease. The Caxiuanã population, more physically isolated, with less access to financial resources and more precarious infrastructure, presents the shortest and thinnest individuals, and intermediate pre-hypertension and SAH levels compared with the quilombolas and Mamirauá. Quilombola men are taller and quilombola women present the highest overweight/obesity prevalence; both men and women have the highest pre-hypertension and SAH prevalence among the three populations. Mamirauá women are the tallest, Mamirauá men have the highest overweight/obesity, and this group presents the lowest pre-hypertension and SAH prevalence in general.
The differences observed among the groups can be attributed to factors such as psychosocial stress (racism/discrimination); cultural behavioural patterns; the greater access to cash and the proximity to urban centres found among the quilombolas; the intense work of the Mamirauá Institute for Sustainable Development to improve the infrastructure, epidemiologic, and income situation of the resident families in Mamirauá; and the particularly precarious conditions of survival, sanitation, and health in general, and the almost total absence of the State, in Caxiuanã. Overall, there is a strong connection between what has been defined as the SDH and the epidemiologic situation of these groups.
Further studies in these and other populations using an SDH framework will contribute to the proposition of future measures seeking to reduce the double burden of disease associated with the epidemiologic transition, and prevent, among the Amazonian rural populations, the high mortality rates due to cardiovascular disorders observed in the urban areas.
In the development of our projects, dialogue with the communities, the local health and education professionals, and the researchers has been prioritised, in order to promote knowledge exchange and local empowerment, as the riverine and quilombolas have been historically kept out of national public policies. These research endeavours also motivated discussion with community and municipal health managers about their health knowledge and needs, contributing to public policy planning aimed specifically at these populations (Silva, 2015). We believe the information presented here can also be of use to policy planners elsewhere throughout the Amazon basin, where some of the world's most vulnerable rural populations survive in different countries and are exposed to similar problems.
Only about 85% of men who have sex with men (MSM) with HIV have been tested for and diagnosed with HIV. Racial/ethnic disparities in HIV risk and HIV care outcomes exist within MSM. We examined racial/ethnic disparities in delayed HIV diagnosis among MSM. Males aged ≥13 reported to the Florida Enhanced HIV/AIDS Reporting System 2000-2014 with a reported HIV transmission mode of MSM were analyzed. We defined delayed HIV diagnosis as an AIDS diagnosis within three months of the HIV diagnosis. Multilevel logistic regressions were used to estimate adjusted odds ratios (aOR). Of 39,301 MSM, 27% were diagnosed late. After controlling for individual factors, neighborhood socioeconomic status, and rural-urban residence, non-Latino Black MSM had higher odds of delayed diagnosis compared with non-Latino White MSM (aOR 1.15, 95% confidence interval [CI] 1.08-1.23). Foreign birth compared with US birth was a risk factor for Black MSM (aOR 1.27, 95% CI 1.12-1.44), but a protective factor for White MSM (aOR 0.77, 95% CI 0.68-0.87). Rural residence was a risk for Black MSM (aOR 1.79, 95% CI 1.36-2.35) and Latino MSM (aOR 1.87, 95% CI 1.24-2.84), but not for White MSM (aOR 1.26, 95% CI 0.99-1.60). HIV testing barriers particularly affect non-Latino Black MSM. Social and/or structural barriers to testing in rural communities may be significantly contributing to delayed HIV diagnosis among minority MSM.
---
INTRODUCTION
The estimated number of new human immunodeficiency virus (HIV) infections among men who have sex with men (MSM) in the United States (US) in 2010 was 29,800 (Centers for Disease Control and Prevention [CDC], 2015). Black MSM accounted for the largest proportion of infections (38%) (CDC, 2016). Although the number of new HIV diagnoses among Black MSM increased 22% between 2005 and 2014, the upward trend appears to be slowing in recent years, increasing less than 1% between 2010 and 2014 (CDC, 2016). While estimating the rate of HIV among MSM has proven difficult, one study in New York City estimated that the case rate per 100,000 among non-Latino Black MSM was 8,781 between 2005-2008, compared with 3,221 among Latino MSM, and 1,241 among non-Latino White MSM (Pathela et al., 2011).
Unfortunately, of the estimated 647,700 MSM with HIV in the US at the end of 2011, only about 85% had been tested for and diagnosed with HIV (CDC, 2014a). This suggests that continued efforts to promptly diagnose MSM with HIV are needed to curb the HIV epidemic in this group. The general population of MSM with HIV experiences better outcomes along the HIV care continuum when compared with other risk groups (CDC, 2014a). However, Black MSM with HIV experience the lowest rates of linkage to HIV care, retention in care, prescription of antiretroviral therapy (ART), and viral suppression compared with MSM from all other racial/ethnic groups (CDC, 2014b), and compared with their male heterosexual counterparts (CDC, 2014c).
While predictors such as Black race, homelessness, MSM disclosure (Nelson et al., 2010), life stressors (Nelson et al., 2014), and MSM-related stigma (Glick & Golden, 2010) have been associated with HIV testing and delayed diagnosis among MSM, less is known about predictors of delayed diagnosis among specific racial/ethnic groups of MSM, particularly those factors at the neighborhood level. Therefore, the objectives of this study were to (a) examine racial/ethnic disparities in delayed HIV diagnosis among MSM, and (b) identify specific individual- and neighborhood-level determinants of delayed HIV diagnosis for each MSM racial/ethnic group in Florida.
---
METHODS
---
Datasets
De-identified HIV surveillance records were obtained from the Florida Department of Health enhanced HIV/AIDS reporting system (eHARS). Cases aged ≥13 who met the CDC HIV case definition (CDC, 2008) during the years 2000-2014 and had a reported HIV transmission mode of MSM were analyzed. Cases with missing or invalid data for ZIP code at time of HIV diagnosis, cases missing the month and year of HIV diagnosis, and cases diagnosed in a correctional facility were excluded. Cases diagnosed in a correctional facility were excluded because they are not representative of the HIV population in the neighborhood where the facility is located, and because they have different access to care than the general population with HIV infection. The 2009-2013 American Community Survey (ACS) was used to obtain neighborhood-level data using ZIP code tabulation areas (ZCTAs) (ACS, 2015). ZCTAs are used by the US Census Bureau to tabulate summary statistics and approximate US Postal Service ZIP codes (US Census Bureau, n.d.).
---
Individual-level variables
The following individual-level data were extracted from eHARS: ethnicity, race, HIV diagnosis year, sex at birth, age at HIV diagnosis, HIV transmission mode, birth country, HIV-to-AIDS interval in months (if case progressed to AIDS), residential ZIP code at time of HIV diagnosis, and whether the case was diagnosed at a correctional facility. Data on mode of HIV transmission were self-reported during HIV testing, reported by a health care provider, or extracted from medical chart reviews. Cases were coded as US-born if they were born in any of the 50 states, District of Columbia, Puerto Rico, or any US dependent territory. Delayed HIV diagnosis was defined as an AIDS diagnosis within 3 months of HIV diagnosis (CDC, 2013).
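The delayed-diagnosis rule above (an AIDS diagnosis within 3 months of the HIV diagnosis, with eHARS recording diagnosis month and year) might be operationalized as follows; the (year, month) tuple representation is an assumption for illustration:

```python
def months_between(start, end):
    """Whole calendar months between two (year, month) pairs."""
    (y1, m1), (y2, m2) = start, end
    return (y2 - y1) * 12 + (m2 - m1)

def delayed_diagnosis(hiv_ym, aids_ym):
    """Delayed = AIDS diagnosis within 3 months of the HIV diagnosis."""
    if aids_ym is None:
        return False  # case never progressed to AIDS in the record
    return 0 <= months_between(hiv_ym, aids_ym) <= 3
```

Working at month granularity matches the surveillance data, which is also why cases missing the month of HIV diagnosis had to be excluded.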
---
Neighborhood-level variables
Thirteen neighborhood-level socioeconomic status (SES) indicators were extracted from the ACS to develop an SES index of Florida neighborhoods (ZCTAs) (Niyonsenga et al., 2013): percent of households without access to a car, percent of households with ≥1 person per room, percent of the population living below the poverty line, percent of owner-occupied homes worth ≥$300,000, median household income in 2013, percent of households with annual income <$15,000, percent of households with annual income ≥$150,000, income disparity (derived from the percent of households with annual income <$10,000 and the percent of households with annual income ≥$50,000), percent of the population age ≥25 with less than a 12th-grade education, percent of the population age ≥25 with a graduate or professional degree, percent of households living in rented housing, percent of the population age ≥16 who were unemployed, and percent of the population age ≥16 employed in a high working class occupation (ACS occupation group: "managerial, business, science, and arts occupations"). Income disparity was calculated as the logarithm of 100 times the percent of households with annual income <$10,000 divided by the percent of households with annual income ≥$50,000 and was used as a proxy for the Gini coefficient (Niyonsenga et al., 2013; Singh & Siahpush, 2002). All neighborhood-level indicators were coded so that higher scores corresponded with higher SES; they were then standardized (Niyonsenga et al., 2013).
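As a sketch, the income-disparity proxy and the standardization step could look like the following; the base of the logarithm is not stated in this section, so base 10 is an assumption:

```python
import math

def income_disparity(pct_under_10k, pct_over_50k):
    """Proxy for the Gini coefficient used in the SES index:
    log of 100 * (% households < $10k) / (% households >= $50k).
    Base 10 is assumed here; the paper says only "logarithm"."""
    return math.log10(100 * pct_under_10k / pct_over_50k)

def standardize(values):
    """z-score one indicator across all ZCTAs (after recoding so that
    higher values correspond to higher SES)."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]
```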
To calculate the SES index, we started by conducting a reliability analysis. The Cronbach's alpha for all 13 indicators was 0.93. We selected 7 indicators based on each indicator's correlation with the total index (high correlation) and on the Cronbach's alpha if the item was deleted (low alpha). The 7 indicators selected were: percent below poverty, median household income, percent of households with annual income <$15,000, percent of households with annual income ≥$150,000, income disparity, percent of the population age ≥25 with less than a 12th-grade education, and high working class occupation. The resulting Cronbach's alpha increased (0.94).
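The reliability step can be illustrated with a plain implementation of Cronbach's alpha (the standard formula; the study's actual indicator data are not reproduced here):

```python
def cronbach_alpha(items):
    """items: list of indicator columns, each a list of values (one per ZCTA).
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Item-deleted alpha, used for the selection, is just this function applied to the list with one indicator removed.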
Second, we conducted a principal component analysis with and without varimax rotation, which revealed one factor with an eigenvalue greater than 1 (5.14). This factor accounted for 73.49% of the variance in the indicators. Because all the factor loadings were high (between 0.80 and 0.93), we retained all 7 indicators. Finally, we added the standardized scores for the 7 variables and categorized the scores into quartiles.
To categorize ZCTAs into rural or urban, we used Categorization C of Version 2.0 of the Rural-Urban Commuting Area (RUCA) codes, developed by the University of Washington WWAMI Rural Research Center (WWAMI Rural Health Research Center, n.d.).
---
Statistical analyses
Individual-and neighborhood-level data were merged by matching the ZIP code at time of HIV diagnosis of each case with the ZIP code's corresponding ZCTA. We compared individual-and neighborhood-level characteristics by race/ethnicity. We used the Cochran-Mantel-Haenszel general association statistic for individual-level variables controlling for ZCTA, and the chi-square test for neighborhood-level variables. Multi-level (Level 1: individual; Level 2: neighborhood) logistic regression modeling was used to account for correlation among cases living in the same neighborhood through a random intercept using ZCTA. Crude and adjusted odds ratios and 95% confidence intervals for delayed diagnosis were calculated comparing cases by race/ethnicity. First, we estimated crude odds ratios (Model 1). Then we controlled for individual-level factors (Model 2). Finally, we controlled for individual-and neighborhood-level variables (Model 3). To identify unique predictors of delayed diagnosis for each group, separate models were estimated stratifying by race/ ethnicity adjusting for year of HIV diagnosis, age, US/foreign-born status, injection drug use, socioeconomic status (index of 7 indicators), and rural/urban status. SAS software, version 9.4 (SAS Institute, Cary, NC 2002), was used to conduct analyses. Multivariate models were adjusted for year of HIV diagnosis to control for likely changes in HIV testing behaviors and HIV testing strategies over the 15-year study period. The Florida International University institutional review board approved this study, and the Florida Department of Health designated this study to be non-human subjects research.
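For the crude comparisons (Model 1), the unadjusted odds ratio with a Wald 95% confidence interval can be computed from a 2x2 table. Note that the study's actual models are multilevel, with a random intercept for ZCTA, which this sketch does not reproduce:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    exposed group: a = delayed diagnoses, b = timely diagnoses;
    unexposed group: c = delayed, d = timely."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi
```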
---
RESULTS
---
Characteristics of participants
Of 91,867 HIV cases reported in Florida 2000-2014, 42,493 had MSM listed as a mode of HIV transmission. Of these, 1,311 were diagnosed in a correctional facility, 1,785 had missing data on ZIP code at time of HIV diagnosis, and 176 had missing data on month of HIV diagnosis (categories are not mutually exclusive). No cases under the age of 13 reported transmission mode as MSM. Of the remaining 39,301 cases analyzed in this study, 27.3% were diagnosed late (see Table 1). This represented a downward trend that started at 38.4% in 2000 and decreased to 18.5% by 2014.
---
Racial/ethnic disparities in delayed HIV diagnosis
The proportion of cases diagnosed late decreased from 2000-2014 for all racial/ethnic groups (see Figure 1). In crude logistic regression models, Latino MSM had lower odds of delayed diagnosis compared with White MSM (see Table 2). After controlling for individual-level factors, Black MSM had higher odds of delayed diagnosis compared with White MSM, and the protective effect of Latino MSM disappeared. The higher odds for delayed diagnosis among Black MSM remained after controlling for neighborhood-level SES and rural/urban status.
---
Predictors of delayed HIV diagnosis by race/ethnicity
HIV diagnosis during 2000-2009 compared with 2010-2014, and diagnosis at 20 years of age or older compared with 13-19, were predictors of delayed diagnosis for Black, Latino, and White MSM (see Table 3). Among Black MSM, being foreign-born compared with US-born, and living in a rural area compared with an urban area, were additionally associated with delayed diagnosis. Among Latino MSM, only residing in a rural area at time of HIV diagnosis was independently associated with delayed HIV diagnosis. Among White MSM, being foreign-born compared with US-born was protective.
---
DISCUSSION
Twenty-seven percent of HIV diagnoses in Florida 2000-2014 with a reported mode of HIV transmission of MSM were diagnosed late. After adjusting for individual- and neighborhood-level factors, Black MSM were at increased odds of delayed diagnosis compared with White MSM. Among Black MSM, being foreign-born and residing in a rural area at the time of HIV diagnosis were risk factors. Rural residence was also a strong predictor of delayed diagnosis for Latino MSM. Neighborhood-level SES was not associated with delayed HIV diagnosis among any racial/ethnic MSM group in Florida.
The proportion of late HIV diagnoses among MSM in Florida for the years 2000-2014 was 27.3% (consistent with national estimates [CDC, 2013]) and decreased from 38.4% in 2000 to 18.5% in 2014. The decline may be partially due to revised recommendations for HIV testing, such as the 2006 CDC (Branson et al., 2006) and 2013 US Preventive Services Task Force (Moyer & US Preventive Services Task Force, 2013) guidelines for opt-out screening of adolescent and adult patients in healthcare settings. While several studies have examined racial/ethnic disparities in delayed HIV diagnosis among the general HIV-infected population (Tang, Levy & Hernandez, 2011; Trepka et al., 2014; Yang et al., 2010), few studies have examined these disparities among MSM. One study of MSM diagnosed in 33 US states between 1996-2002 found significant differences in the proportions of Black MSM (23.1%, 95% CI 22.4-23.7) and Latino MSM (23.7%, 95% CI 22.6-24.7) who were diagnosed late compared with White MSM (18.4%, 95% CI 17.9-18.9) (Hall et al., 2007). However, that study included the earlier years of the epidemic and used AIDS diagnosis within 12 months of HIV diagnosis to define delayed diagnosis. In our study, Black MSM had higher odds of delayed diagnosis compared with White MSM after adjusting for individual-level factors. Black MSM tended to be younger, with over 70% diagnosed between the ages of 13 and 39, compared with 47% for White MSM. Differences in age, as well as in year of diagnosis and nativity, appear to confound disparities in delayed diagnosis between Black and White MSM. Conversely, the apparent advantage among Latinos when compared with Whites in the crude model appears to be related to differences in individual-level factors.
It remains unclear why Black MSM are more likely to be diagnosed late with HIV. Previous studies and a meta-analysis suggest that Black MSM have higher rates of HIV testing (Pathela et al., 2011; Millet et al., 2007). However, a population-based study suggested that MSM-related stigma among Blacks (72%) and Black MSM (57%) is high, and higher than among Whites (52%) and White MSM (27%), and that unfavorable attitudes toward MSM are associated with no prior HIV testing (Glick & Golden, 2010). A quantitative study comparing MSM who tested late for HIV with those who did not found that being Black, homelessness, disclosing male-male sex to 50% or less of one's social circle, having one sexual partner versus more than one in the past 6 months (Nelson et al., 2010), and experiencing multiple life stressors (Nelson et al., 2014) were associated with delayed HIV testing and diagnosis. Further, Black MSM experience more homelessness (Sullivan et al., 2014) and higher rates of depression than White MSM (Richardson et al., 1997), may be less likely to disclose their MSM status to others (Gates, 2010), and may have or perceive less social support (Stokes, Vanable & McKirnan, 1996).
Over 50% of Black MSM resided in neighborhoods in the lowest quartile of SES, compared with 35% of Latino MSM, and 20% of White MSM in our study. The disparity in delayed diagnosis between Black and White MSM decreased but remained after adjusting for neighborhood SES and rural/urban residence. Our results suggest that a comprehensive index of neighborhood SES and rural/urban status explain a portion of the observed disparities between Black MSM and White MSM and do not account for the disparity that remains after controlling for individual-level factors.
Being foreign-born was associated with delayed HIV diagnosis for Black MSM. Our results are similar to those from a national study of 33 US states that found a higher proportion of delayed HIV diagnosis (AIDS within 12 months of HIV diagnosis) among foreign-born Black MSM (44.1%) compared with US-born Black MSM (36.7%) (Johnson, Hu, & Dean, 2010). Our population of foreign-born Black MSM was primarily born in Haiti (49.1%), Jamaica (16.3%), and the Bahamas (5.6%). In the national study mentioned above by Johnson and colleagues, the proportion of Caribbean-born Blacks diagnosed late was 44.2%, higher than the proportion of African-born Blacks (42.1%). A study of 1,060 Blacks in Massachusetts found that foreign-born Blacks were less likely to report HIV testing compared with US-born Blacks (42% vs. 56%) (Ojikutu et al., 2013). Ojikutu et al. found that HIV-related stigma was higher, and knowledge was lower, among foreign-born Blacks compared with US-born Blacks, particularly among Caribbean-born participants than among sub-Saharan African participants (Ojikutu et al., 2013). They also found that over 50% of foreign-born Blacks reported that their most recent HIV test was part of an immigration requirement. The HIV testing requirement for immigrants was lifted in 2010 and has likely impacted testing patterns among immigrants (Winston & Beckwith, 2011).
After adjusting for individual-level factors and neighborhood SES, rural residence was a predictor of delayed diagnosis among Black and Latino MSM. Forty-one percent of both Black MSM and Latino MSM who resided in rural areas were diagnosed late, compared with 27% and 25% of their urban counterparts. A previous population-based cohort study of Florida HIV cases reported that 35% of Blacks in rural areas were diagnosed late, compared with 29% in urban areas (Trepka et al., 2014). This suggests that Black MSM in rural areas have a higher risk of delayed HIV diagnosis not only when compared with Black MSM in urban areas, but also when compared with both the rural and urban general HIV-infected Black population. It is possible that high levels of HIV- and MSM-related stigma, and higher risk of loss of confidentiality in rural areas compared with urban areas, are preventing MSM from routine HIV testing, particularly among racial/ethnic minorities (Preston et al., 2002). Fear of being the target of a violent crime due to hostility against MSM has been reported in a qualitative study of MSM in rural Wyoming (Williams, Bowen & Horvath, 2005). Of note, rural areas in Wyoming are likely very different and more isolated from larger cities than rural areas in Florida. A study in Europe found that MSM who resided in smaller cities reported higher internalized homonegativity compared to those who resided in larger cities, and that higher homonegativity was associated with decreased likelihood of HIV testing (Berg et al., 2011).
A limitation of this study is related to our definition of late diagnosis. It is possible that some individuals who had AIDS within three months of HIV diagnosis were not diagnosed with AIDS until after three months. However, we believe that the possibility of misclassification is small given that cases with AIDS likely had symptoms that encouraged prompt HIV care seeking behavior. Furthermore, HIV reporting was not mandated in Florida until 1997. It is possible that cases diagnosed prior to 1997 were later reported as new HIV diagnoses, and therefore, mistakenly appear to have a shorter HIV-to-AIDS time interval. Nevertheless, it is worth noting that our rate of delayed diagnosis for MSM was nearly identical to national estimates (CDC, 2013). Additionally, our dataset did not allow us to examine important variables, such as individual-level SES, access to health insurance, and HIV testing patterns and barriers. Finally, the small number of rural cases limited our ability to stratify racial/ethnic groups by rural/urban status to identify unique predictors of delayed diagnosis in rural areas.
Most cases of late HIV diagnosis can be prevented; it is estimated that only 3.6-13% of infections are due to accelerated disease progression (Sabharwal et al., 2011). Therefore, regular HIV testing, as per the current guidelines, offers an opportunity to diagnose individuals prior to developing AIDS. However, barriers to the implementation of routine testing exist, creating disparities across racial/ethnic and other groups. Our findings warrant future investigations on potential cultural barriers to HIV testing among foreign-born Black MSM, as well as on the contextual differences between rural and urban culture that appear to affect HIV testing among MSM. Strategies, such as using social networks to increase HIV testing, have shown promising results among Black MSM (Fuqua et al., 2012) and may also be effective among foreign-born and rural populations of Black MSM. |
Background: Resident satisfaction is an important aspect of nursing home quality. Despite this, few studies have systematically investigated what aspects of nursing home care are most strongly associated with satisfaction. In Sweden, a large number of processual and structural measures are collected to describe the quality of nursing home care, though the impact of these measures on outcomes including resident satisfaction is poorly understood. Methods: A cross-sectional analysis of data collected in two nationally representative surveys of Swedish eldercare quality using multi-level models to account for geographic differences. Results: Of the factors examined, nursing home size was found to be the most important predictor of resident satisfaction, followed by the amount of exercise and activities offered by the nursing home. Measures of individualized care processes, ownership status, staffing ratios, and staff education levels were also weakly associated with resident satisfaction. Contrary to previous research, we found no clear differences between processual and structural variables in terms of their association with resident satisfaction. Conclusions: The results suggest that of the investigated aspects of nursing home care, the size of the nursing home and the amount of activities offered to residents were the strongest predictors of satisfaction. Investigation of the mechanisms behind the higher levels of satisfaction found at smaller nursing homes may be a fruitful avenue for further research. | Background
The ageing population in many western countries has created increased demand for high-quality medical and social care services. This includes nursing home (NH) care, referring to facilities providing 24-h functional support and care for persons who require assistance with activities of daily living and who often have complex healthcare needs [1]. Achieving quality in NH care is complicated by the fact that care quality is multifaceted, difficult to define and measure, and may be perceived differently by different stakeholders [2]. Regulatory agencies thus often struggle to identify factors most important in achieving high-quality NH care [3].
A particular challenge in regulating quality in NH care is that it is in many regards a 'soft' service in which the individual experiences of NH residents are an important dimension of quality. While many aspects of quality (e.g., clinical quality and cost effectiveness) must be considered in order to achieve a well-rounded assessment of the care provided at a given nursing home, some scholars have argued that resident satisfaction may be the most appropriate assessment of quality in NH care [4,5]. In health care, investigations of patient satisfaction are abundant [6,7], while studies measuring NH resident satisfaction are less common. This may be due to the suggestion that elderly patients with cognitive impairment have difficulty reliably answering surveys [5], though studies have shown that patients in cognitive decline are capable of answering surveys, particularly if they are designed with their needs in mind [8][9][10][11].
Given that the satisfaction of residents is an important dimension of quality in NH care, the question becomes how this is achieved. That is to say, what factors are most important to focus on when seeking to improve the satisfaction of NH residents? The most commonly used analytical framework for understanding how quality is generated in health and social care is Donabedian's structure-process-outcome model [12,13]. A central distinction in Donabedian's model is that between structural and processual quality factors, which are seen as potential explanatory factors behind quality outcomes. Structural factors refer to the physical attributes of the setting in which care is provided, including the number and qualifications of staff, equipment, and physical facilities [13]. Processual factors denote the manner in which the care services are delivered, e.g. whether care routines follow set guidelines, and the extent to which residents are involved in decisions about their care. Quality outcomes can be measured in many ways, both objectively in the form of health status or subjectively in the form of patient/resident satisfaction [12]. A central unresolved question posed in Donabedian's work is whether structural or processual measures are most important for generating outcome quality, and precisely how these factors interact to produce the desired outcomes.
The literature on medical quality in NH care in terms of, for instance, mortality and adverse event rates, has investigated numerous explanatory factors including staffing, ownership, care routines, and the size of facilities [14][15][16][17]. Such studies are particularly abundant in the United States, where collection of the Minimum Data Set provides a robust basis for performing broad studies of clinical outcomes. There are considerably fewer investigations of the determinants of resident satisfaction. Previous studies have investigated structural factors including staff satisfaction [18], and job commitment [19], with both studies finding positive associations with resident satisfaction. A broader study of the influence of organizational factors found that NH ownership, staffing levels, and the provision of family councils were important predictors of NH resident satisfaction [20]. Others have investigated specific interventions related to processual quality factors such as improved meal time routines [21], "person-centered care" initiatives [22], and social activity programs such as gardening [23]. While generally finding positive effects on resident satisfaction, these interventional studies are narrow, and differ in terms of setting and methodology, making them difficult to compare. Taken together, the prior literature on what factors are associated with resident satisfaction in NHs is largely limited to evaluations of specific interventions, and there are few studies investigating the relative influence of structural and processual factors, particularly in the European context.
In Sweden, several public investigations have pointed to quality deficiencies, and a lack of systematic knowledge about factors leading to improved quality [24,25]. The issue of NH care quality has increased in significance in Swedish public debate as reforms have led to an increasing number of homes contracted out by local governments (municipalities) to private, often for-profit firms. In 2017, one study found that about one fifth of the Swedish NHs were run by for-profit providers [26]. This study, as well as another recent investigation of Danish NHs, found that overall, privately operated homes outperformed public and non-profit homes in terms of process measures, while underperforming in terms of structural measures [26,27]. Neither of these studies investigated resident satisfaction however.
In Sweden, there is good availability of data on various aspects of NH care due to comprehensive data collection efforts by the Swedish National Board of Health and Welfare (NBHW). Annual surveys measuring satisfaction are sent by the NBHW to all NH residents, and surveys assessing processual and structural measures of quality are sent to every NH in Sweden. So far however, the use of these data for research has been limited. One exception is a study by Kajonius and Kazemi [28] which investigated differences in satisfaction among NH residents at the municipal level, finding that processual quality factors such as respect and access to information appeared to be more important for residents than structural factors such as staffing and budget.
In this study, we aim to evaluate which structural and processual measures of quality have the strongest associations with overall NH resident satisfaction. In doing so, we hope to provide policymakers and researchers with a broader picture of the determinants of resident satisfaction at NHs than has previously been available.
---
Methods
---
Setting
In Sweden, all citizens have access to publicly funded NH services at heavily subsidized rates. The eldercare system in Sweden is decentralized, with responsibility for service provision resting with the nation's 290 municipalities. Municipalities are obliged to offer NH care to those determined to have a need for such care based on national criteria. The municipality may provide services themselves, or contract out service provision to private entities [29]. In 2016, there were in total 88,886 individuals [30] living in ca. 2300 NHs in Sweden [31], with 20.5% of residents living in NHs operated by private providers [30]. While marketization reforms have led to an increase in the proportion of privately managed NHs, they remain publicly funded [32]. All NHs, both public and private, are subjected to the same national quality reporting requirements, user safety regulations, and auditing measures [33]. This study includes all NHs in Sweden providing care to individuals over 65 years of age in 2016, excluding facilities offering only short-term care.
---
Data collection
Two nationally representative surveys conducted in 2016, both developed and administered by the NBHW, serve as the primary sources of data. The first survey is a user satisfaction survey (Brukarundersökningen, or user survey) distributed yearly to all individuals over 65 years of age receiving elder care services including NH care. This survey consists of 27 separate items to be rated on a five-point Likert scale, relating to their satisfaction with a variety of aspects of elder care services, as well as their health status. Among those living in NHs the survey had a response rate of 56% in 2016, resulting in a total of 40,371 responses [34].
The second data source is a survey sent directly to all NHs in Sweden by the NBHW, which assesses a number of processual and structural measures of quality. This survey (Enhetsundersökningen, or unit survey) is completed by administrative staff at each NH, and had a response rate of 93% in 2016, resulting in 2153 responses [35]. In addition to quality measures, the unit survey provides data on the type of services provided by the NH (general, dementia and/or assisted living), the number of residents in each home, and whether the NH is operated by a public or private entity. While the NBHW has a long experience of developing and administering surveys, and assessments of loss to follow-up in the user survey have been performed [36], the psychometric properties of these surveys have not been published in the publicly available literature.
Observations in the two NBHW survey datasets for 2016 were matched based on the NH name and municipality. This involved both an automated matching process, and a subsequent manual review of unmatched records. Municipality-level variables were extracted from the national municipality and county council database Kolada [37] and merged into the dataset.
---
Variables
Variables for analysis were aggregated from the two surveys based on their conceptual meaning and the results of an exploratory factor analysis which may be found in Additional file 1, p 1-7. The extracted variables are detailed below, and a summary of the categorization is available as Additional file 2.
---
Dependent variable
Upon exploratory factor analysis, it was found that questions in the user survey were highly correlated (Cronbach's alpha = 0.92), making the survey a poor candidate for approaches based on extraction of distinct latent variables. As such, we chose to extract a single composite measure of satisfaction from the user survey for use as the dependent variable, consisting of questions 5-19, 21-25, and 27. To generate this composite measure, the percentage of residents at a nursing home responding positively to a given survey question was normalized by subtracting the average percentage of residents responding positively to that question in the population and dividing by the standard deviation of the population, resulting in a standardized z-score. Z-scores were then averaged across all included survey items to produce a composite score with equal weights for each question.
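The composite described above can be sketched as follows. This is an illustrative re-implementation in Python rather than the authors' actual code (the analysis was performed in R), and the variable names and toy data are our own, not drawn from the NBHW surveys:

```python
from statistics import mean, pstdev

def composite_satisfaction(pct_positive_by_home):
    """Equal-weight composite z-scores per nursing home.

    pct_positive_by_home: dict mapping home -> list of per-question
    percentages of residents responding positively, with the same
    question order for every home.
    """
    homes = list(pct_positive_by_home)
    n_questions = len(next(iter(pct_positive_by_home.values())))
    # Population mean and SD for each question across all homes.
    means = [mean(pct_positive_by_home[h][q] for h in homes)
             for q in range(n_questions)]
    sds = [pstdev(pct_positive_by_home[h][q] for h in homes)
           for q in range(n_questions)]
    # Standardize each question, then average z-scores with equal weights.
    return {
        h: mean((pct_positive_by_home[h][q] - means[q]) / sds[q]
                for q in range(n_questions))
        for h in homes
    }

# Hypothetical two-home, two-question example.
scores = composite_satisfaction({
    "home_a": [90.0, 80.0],
    "home_b": [70.0, 60.0],
})
```

With two homes symmetric around the mean, the composite is simply +1 and -1; in the real data each home's score reflects its standing relative to all 1798 homes on each of the 21 included items.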
---
Independent variables
The NBHW divided the unit survey into 12 conceptual categories. A factor analysis showed that the individual questions generally loaded well onto the categories proposed by the NBHW, and we therefore chose, with a few exceptions, to retain this categorization as the basis for the independent variables used in the analysis. Based on the Donabedian model, the independent variables were divided into "structural" and "processual" variables.
---
Processual variables
The first seven variables related to different processual factors, such as meal-related routines or physical or social activities.
Questions 1 and 1a in the unit survey related to the ability of residents to participate in "resident councils" where residents regularly meet to voice concerns in the NH. Issues raised during resident councils may for instance include the planning of common activities or menus for the coming weeks. These were aggregated and reported as the variable Participation in resident councils.
Questions 2 and 3 in the unit survey concerned the existence of, and the residents' participation in the creation of, "action plans" concerning the care needs and wishes of the resident. These action plans contain information about how various care activities are to be carried out and should be updated every 6 months. The questions were combined into the variable Individualized action plans.
Questions 4 and 5 addressed the existence of meal-related routines, and the documentation of meal preferences in the residents' action plans. Such meal routines are to be based on the Five Aspects Meal Model (FAMM) proposed by Gustafsson et al. [38], and should be updated every 24 months. The questions were combined into the variable Meal-related routines and plans.
Questions 6a-c in the survey related to the existence of formal routines for handling resident safety issues such as threats, violence, and addiction. While the NBHW grouped question 7 (routines for cooperation with relatives) into this category, it did not load well onto a common factor and is conceptually quite distinct, and was therefore excluded. The remaining questions were combined into the variable Patient safety routines.
Questions 8 and 8a-b in the unit survey related to facilities for, and availability of, exercise and social activities. We excluded question 8 (whether the NH residents have access to facilities for physical activity), which had a weak-to-moderate factor loading, so as to interpret this variable as a purely process-related measure. The remaining questions were combined into the variable Availability of exercise and social activity.
Questions 9 and 10 related to the existence of routines for planning care in cooperation with other healthcare providers, and whether resident's involvement was documented. Similarly, questions 11 and 12 related to routines for medication reviews and whether resident participation is documented in the medical record. We reported these as the variables Care coordination routines and Medication review routines, respectively.
---
Structural variables
The structural variables included indicators of staffing, ownership, and size. Three staffing-related factors were identified from the unit survey: the ratio of nurses per resident (questions 13 and 14), non-nurse staff per resident (questions 15 and 16), and the portion of staff with an "adequate education" for their position (questions 17 and 18). These are reported as the variables Nurses per resident, Staff per resident, and Staff with adequate education respectively, and weekday and weekend staffing levels were weighted at a 5:2 ratio to represent average daily staffing levels. While staffing ratios are fairly straightforward to calculate, the definition of what constitutes an "adequate education" is more complex. Adequacy is determined by the amount of healthcare-related training completed by non-nurse staff based on a point scale established by the NBHW [39].
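The 5:2 weekday/weekend weighting amounts to a simple weighted average over the seven days of the week. A minimal sketch (illustrative only; the input ratios below are hypothetical, not NBHW figures):

```python
def avg_daily_staffing(weekday_ratio, weekend_ratio):
    """Weight weekday and weekend staffing levels 5:2 to approximate
    the average daily staffing level over a full week."""
    return (5 * weekday_ratio + 2 * weekend_ratio) / 7

# E.g. 0.35 staff per resident on weekdays, 0.21 on weekends.
avg = avg_daily_staffing(0.35, 0.21)
```
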
The number of beds available at each NH was reported as Size of nursing home. The NH's ownership status, i.e. whether it was run by a private or a public provider, was reported as the variable Private ownership.
---
Controls
Several variables were included in the analysis to control for population health differences between the NHs included in this study. Self-rated health has been found to be an excellent predictor of clinical outcomes [40,41], and we used questions 1-3 and 20 in the user satisfaction survey, which asked about the residents' physical and mental well-being, to control for health status. The type of facilities (general, dementia and/or assisted living) available at the NH was also controlled for.
It was further deemed necessary to control for demographic factors for which data was only available at the municipal level. This refers to different demographic, economic, and political conditions which may vary significantly between the 290 municipalities. A set of controls was adapted from previous studies [26,42,43] including per capita income levels, population density, age profiles, political control, and expenditures, the details of which may be found in Table 1. Data at the municipality level was collected from the Kolada database [37].
---
Statistical analysis
As the large number of quality measures made available by the NBHW was unsuited to direct inclusion in a regression-modelling framework, an initial exploratory factor analysis was performed to reduce the dimensionality of the dataset as described above. Data from the user satisfaction survey and the unit survey were aggregated at the NH level. We sought to minimize bias in the estimation of the effects of the investigated quality measures by drawing upon the approach to causal modelling first described by Pearl [44], using the assumptions of causal directionality described by the Donabedian model of healthcare quality [12,13]. The Donabedian model asserts that a causal relationship exists between structural and processual aspects of healthcare quality, and we assumed that the satisfaction of NH residents would be confounded by their health status. To control for confounding due to these causal relationships, the effects of processual measures of quality were modeled controlling for resident health and structural measures of quality. We present coefficient estimates for structural measures including controls for other measures of structural quality, though the direction of causality within the selected set of structural measures is in many cases unclear. In addition to these full models, we present additional nested models estimating bivariate associations, and models controlling only for resident health. In this framework, variations in the regression coefficients between the full and nested models allowed for the interpretation of the impact of health status and structural factors on the effect of the quality measures.
The aggregated variables were first analyzed in a classical ordinary least squares regression framework using the Huber-White sandwich estimator to account for heteroscedasticity and clustering, as implemented in the rms R package [45]. Hierarchical models including municipality-level controls with random intercepts for municipalities were implemented using a "partial pooling" approach to account for clustering and confounding due to municipal-level factors [46], as implemented in the lme4 R package [47]. Confidence intervals were generated using basic parametric bootstrap resampling.
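The intuition behind "partial pooling" with random intercepts can be illustrated with a minimal shrinkage estimator. This is a conceptual sketch only: lme4 estimates these quantities by (restricted) maximum likelihood rather than with this closed form, and the shrinkage constant below stands in for a variance ratio that a mixed model would estimate from the data:

```python
def partially_pooled_mean(group_values, grand_mean, shrinkage_k=5.0):
    """Shrink a municipality's mean satisfaction toward the grand mean.

    Municipalities with few NHs are pulled strongly toward the grand
    mean; municipalities with many NHs keep estimates close to their
    own mean. shrinkage_k plays the role of the between/within
    variance ratio estimated by a mixed model (assumed here).
    """
    n = len(group_values)
    group_mean = sum(group_values) / n
    w = n / (n + shrinkage_k)  # weight on the group's own data
    return w * group_mean + (1 - w) * grand_mean

# A municipality with only 2 NHs is shrunk heavily toward 0 ...
est_small = partially_pooled_mean([0.8, 1.0], grand_mean=0.0)
# ... while one with 20 NHs mostly keeps its own mean.
est_large = partially_pooled_mean([0.9] * 20, grand_mean=0.0)
```

This is why the multi-level models in Fig. 1b can give municipality-level effects without treating a municipality observed through a single NH as reliably measured.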
In this analysis, we report our results in terms of standardized regression coefficients. While this allows for direct comparison of the importance of each independent variable in predicting resident satisfaction, it makes interpretation in terms of absolute effects cumbersome. Given the low rates of missing data at the unit level, multiple imputation was not deemed to be necessary, and cases with missing values were deleted list-wise in the relevant models. All statistical analyses were performed using R version 3.5.0, and a reproducible accounting of our reported findings is included as Additional file 1. A number of sensitivity analyses investigating the impact of various model specifications, potential biases due to loss to followup, and assumptions made in the main analysis are also included in Additional file 1. Source code and the data necessary to reproduce these findings are available on Mendeley Data [48].
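A standardized coefficient can be obtained by rescaling an unstandardized slope by the ratio of predictor and outcome standard deviations. The sketch below shows the single-predictor case (a generic illustration, not the exact rms/lme4 output, and the toy data are hypothetical):

```python
from statistics import pstdev

def standardized_slope(x, y):
    """Simple-regression slope after standardizing x and y.

    Equivalent to b * sd(x) / sd(y); in the single-predictor case
    this equals Pearson's r.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Unstandardized OLS slope via the normal equation.
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b * pstdev(x) / pstdev(y)

beta = standardized_slope([10, 20, 30, 40], [1.0, 2.0, 3.0, 4.0])
```

Because both variables are put on a standard-deviation scale, coefficients for, say, NH size and staffing ratios become directly comparable, at the cost of losing the natural units of each predictor.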
---
Results
Data from both surveys (the user survey and the unit survey) were aggregated at the NH level, resulting in 1921 records in the user survey, and 2189 records in the unit survey. 1711 records could be automatically linked based on municipality and NH names, and an additional 87 records could be matched through manual review, resulting in a dataset containing 1798 NHs. An analysis of non-matched records may be found in Additional file 1, p 7-8. An analysis of the association between survey response rates and the investigated variables was performed. We found a positive association between response rates and resident satisfaction, as well as a negative association between response rates and nursing home size, and an effect indicating that private nursing homes had higher response rates (see dropout analysis in Additional file 1, p 8). Generally, residents of NHs were quite satisfied; in the 2016 survey, 83% answered that they overall were fairly or very satisfied with the care they received.
---
Descriptive data
Descriptive statistics were generated for each of the variables included in the analysis, and are presented in Table 1. We found that the average NH in Sweden has space for 43 residents, a resident-to-staff ratio of roughly 3.5:1, a resident-to-nurse ratio of 30:1, and that 83% of non-nurse staff had an adequate level of education as defined by the NBHW criteria. 19% of included NHs were operated by private providers. 80% of NHs offered general care services, while 60% offered dementia care services, and only 5% had assisted living facilities; these sum to over 100% because a single NH can offer more than one type of service.
With regard to municipality level statistics, we see that about 21% of Swedes are over the age of 65, 4% of whom live in NHs, where the average age of residents is 83. The average annual per-resident cost for the municipality is 838 thousand SEK (around 80 thousand EUR), while average per capita taxable income is 188 thousand SEK (Table 1).
---
Regression analysis
Figure 1 presents the summarized results of each of the models developed to characterize the independent variables created from the unit survey. Figure 1a presents the results using a classical OLS regression framework, while Fig. 1b presents the results of hierarchical mixed-effects models controlling for municipal-level effects.
In terms of overall predictive value, an OLS model including all covariates achieved an adjusted r² of 0.182, while the conditional r² value [49] of the multi-level model containing all predictor variables was 0.254. In the multi-level framework, we found that variation between municipalities accounted for 10% of the total variation found between NHs. A total of 12 processual and structural variables were extracted from the unit survey for analysis as independent variables. Upon analyzing the results, variable groupings were identified post hoc based on similarities with regard to effect sizes and conceptual meanings, which are used to simplify the discussion of our findings, and are labelled on the right-hand side of Fig. 1.
The variables in the first group, labelled Individualized care, are all related to the individual care process. They include the variables Participation in resident councils, Individualized action plans, and Meal-related routines and plans. This group had an average effect size of 0.06 in our fully controlled models, and 95% confidence intervals in the main model consistently excluded zero after adjusting for municipality-level covariates. The significance of the variables in this group varied across sensitivity analyses, however (see Additional file 1, p 22-25).
The next group, labelled Safe care, includes the variables Patient safety routines, Care coordination routines, and Medication review routines. They are all related to the existence of formal guidelines dealing with various aspects of care. As seen in Fig. 1, none of these variables displayed significant correlations to resident satisfaction.
The final group in the processual category consists of only one variable, Availability of exercise and social activity. This variable, labelled Activity, displayed the highest degree of correlation with overall resident satisfaction among the process variables, with an effect size of 0.11 in our fully controlled model, and was robust across a range of sensitivity analyses.
Turning to the structural variables, another three variable groups were identified. We identified no significant effects in the OLS model with regard to ownership status. Upon controlling for municipality-level variables, a significant positive correlation with a magnitude of 0.06 in the fully controlled model was found, though the significance of the association was sensitive to variations in model specifications.
The Size of the NH was by a significant margin the most important predictor of resident satisfaction in this analysis, with the negative coefficient suggesting that smaller NHs are associated with more satisfied residents. A small decrease in the effect of this variable could be noticed upon controlling for municipality-level effects, suggesting that larger NHs may be more common in municipalities where residents are, on average, less satisfied with their NH care. The effect of size was robust in our sensitivity analyses.
The third group of structural variables included Nurses per resident, Staff per resident and Staff with adequate education, and was labelled Staffing. The group as a whole had an average effect size of 0.05 among the fully controlled models. With the exception of nurse staffing ratios, 95% confidence intervals consistently excluded zero in the main models, but the significance of the effect was sensitive to varying model specifications.
Taken together, the results of the analysis presented in Fig. 1 show that the structural measure Size of the NH was the most important predictor of resident satisfaction, followed by the processual Availability of exercise and social activity variable. The effects of the processual Individualized care variables and the structural Staffing variables were similar in magnitude, as was the effect of Private ownership upon controlling for municipality-level effects. These effects were also sensitive to alternate model specifications. The processual Safe care variables were not found to have any significant association with resident satisfaction.

Finally, a comment on the significant effects found among our control variables is in order. In our fully controlled model, self-rated health was found to have a strong positive correlation with satisfaction (standardized regression coefficient of 0.34), suggesting that healthier residents reported considerably higher levels of satisfaction. Among the municipality-level controls, average NH resident age had a positive correlation with satisfaction, and average per capita taxable income had a negative correlation with satisfaction. Interestingly, no significant relationship between the amount spent per resident and satisfaction was identified. Full model summaries, along with a table reporting the data upon which Fig. 1 is based, may be found in Additional file 1, pp. 12-15.
---
Discussion
In this study, we investigated a total of 12 variables representing different aspects of care quality reported in the NBHW unit survey. Of these, seven were considered to represent process-related quality, and five to represent structural quality. Our main findings were that the Size of a NH (a structural measure) had the greatest impact on resident satisfaction, followed by the processual measure Availability of exercise and social activities. The processual variables concerning Individualized care and the structural variables Staffing and Private ownership all had similar, weakly positive effects on resident satisfaction. The processual Safe care variables had no significant effect on resident satisfaction. We found no clear differences in terms of effect sizes between processual and structural variables. Below, we discuss these findings in order of the effect sizes identified in our results.
The fact that NH size was the best predictor of resident satisfaction suggests that smaller NHs in Sweden had more satisfied residents than their larger counterparts. A recent literature review surveying studies examining the impact of NH size on quality outcomes showed size to be an important predictor of quality, with smaller homes generally having better quality outcomes [15]. None of the 30 studies investigated the relationship between size and resident satisfaction, though five investigated similar composite "Quality of Life" measures. There are however some indications that larger nursing homes may be associated with better clinical outcomes such as lower hospitalization risks [50] and lower rates of antipsychotic medication use [51]. NH quality is a multi-faceted concept, and it is not necessarily the case that the determinants of quality will affect all aspects of quality in the same way. As such, while this study does add to the evidence that smaller NHs are associated with the type of "soft" quality which resident satisfaction may be said to represent, the results should not be interpreted as saying anything regarding "harder" measures including clinical outcomes, the determinants of which may be quite different.
While size may be an important predictor of satisfaction in and of itself, it is also likely that there are causal mechanisms behind this association which mediate the effect of size. Previous research has, for instance, indicated that staff turnover may be lower [52] and staff continuity higher [53] at smaller NHs. The findings of this study thus emphasize the importance of identifying the more proximal mechanisms by which smaller NHs generate higher levels of satisfaction. The interpersonal aspects of nursing home care which such mechanisms reflect are, however, difficult to measure, and investigating these softer dimensions of nursing home care may require a more qualitative approach.
The Availability of exercise and social activities was found to have the strongest association with resident satisfaction among the processual variables. Previous research has found that physical activity-related interventions can improve the subjective health status of NH residents [54], although other studies have found weaker or even negative effects [55]. Our results suggest that, overall, NHs which offer more frequent opportunities for exercise and social activity have higher levels of resident satisfaction. The effect of activity was not diminished by controlling for resident health or NH structure; rather, the effect increased slightly, suggesting that the provision of such activities may be even more important at NHs with poorer structural preconditions, particularly with regard to facility size.
Three other variable groups had weaker effects with regards to resident satisfaction: Individualized care, Private ownership, and Staffing. The Individualized care variables included participation in resident councils, the use of individualized care plans and the use of meal routines. We identified no previous research regarding the impact of resident councils or the use of individualized care plans on satisfaction in the literature, though Lucas et al. [20] did identify a positive impact of similar "family councils". Our findings suggest that these quality improvement measures may indeed be associated with higher levels of resident satisfaction, although more directed studies are necessary to confirm this. There is some evidence that interventions to improve meal-related processes are effective [56,57], and our results are consistent with a positive impact of such improvements on resident satisfaction.
The structural measures related to staffing had effect sizes similar to those found among the processual individualized care measures. Staffing as a determinant of care quality has been well researched. In a review of 70 articles, Castle [58] found a preponderance of evidence suggesting that increased staffing levels are positively associated with several measures of NH care quality. More recent studies by Castle and Anderson [59], Hyer et al. [60], and Shin and Hyun [61] point to similar results. However, none of these studies investigated effects on resident satisfaction. We found that both non-nurse staffing ratios and education levels were associated with resident satisfaction in all models, while nurse-to-resident ratios were significant upon controlling for municipal-level factors, and effect sizes were reduced upon controlling for other structural factors. Our results are thus consistent with a positive relationship between staffing levels and NH care quality.
Regarding the effect of ownership, the main results suggest a higher level of resident satisfaction among privately operated NHs after controlling for municipal-level covariates. That is to say, while there was no overall difference in absolute levels of satisfaction, a difference was identified upon taking into account that public and private NHs are not evenly distributed across Sweden; when the effects of this non-uniform distribution were accounted for (in effect comparing NHs within the same municipality), a difference could be identified. This somewhat counter-intuitive effect could, at least in part, be explained by the tendency of private care providers in Sweden to establish themselves in municipalities with higher income levels, where resident expectations may be higher. This supposition is supported by the finding that average per capita income had a significant negative association with satisfaction (see Additional file 1, p. 17). The significance of ownership status was not robust in sensitivity analyses, however, and as such constitutes quite weak evidence for the superiority of private over public nursing homes with regards to resident satisfaction.
While we found no association between measures of safe care and resident satisfaction, it stands to reason that the processes which these measures represent (e.g. the performance of regular medication reviews and the existence of care coordination plans) are not immediately visible to residents, and are thus less likely to influence satisfaction. Studies investigating the impact of these measures on clinical outcomes may well find that they do have an effect with regards to quality in that respect.
Taken together, the findings of this study indicate that NH residents are more satisfied in smaller NHs, and in NHs with frequent opportunities for physical and social activity. Only weak effects were identified with regards to processual individualized care measures, private nursing home ownership, and staffing levels. Formal care routines had no significant effect on the satisfaction of residents. Another contribution of the study is the comparison of the effect of structural and processual variables on satisfaction. In contrast to a previous study on Swedish NH care [28], this study did not lend support to any firm conclusions regarding the superiority of one type of quality measure over the other. Rather, it was demonstrated that both structural variables such as size, staffing and ownership, and processual variables including individualized care and activities play a role in determining resident satisfaction. The difference in results between the two studies could be explained by the fact that the processual and outcome variables in the Kajonius and Kazemi study were both drawn from the resident survey (which we found upon factor analysis to be highly inter-correlated), while the structural variables they were compared with were drawn from a separate statistical database lacking this overall correlation. It is thus likely that the differential effects identified by Kajonius and Kazemi are an artefact of how the authors chose to operationalize the processual and structural measures. Furthermore, in that study, data were aggregated at the municipal level, thereby investigating only differences in resident satisfaction between municipalities, which we found to account for only 10% of the total variation in satisfaction between NHs.
---
Strengths and limitations
This study was a secondary analysis of two nationally representative surveys collected for quality improvement purposes. A strength of the study is thus that the results are likely to generalize well to other contexts similar to that of Sweden, and the wide scope of these surveys allowed us to investigate and compare a broad range of factors. A limitation of the study was that the validity and reliability of these surveys has not been established in the publicly available literature, although the NBHW has analyzed the impact of loss to follow-up in the user survey [62], and performs ongoing internal quality assurance of the surveys it conducts. Another risk involved in the secondary analysis of data is the proliferation of "researcher degrees of freedom" arising from the numerous decisions which must be made in transforming and analyzing such data [63]. To ameliorate these risks, we sought to define our analysis strategy a priori, and provide the resources necessary to fully reproduce our results [48]. Another limitation is that the aggregate data used in this study precludes the interpretation of results in terms of individual-level effects, and readers must be careful not to commit the "ecological fallacy" of interpreting effects operative at the NH level as applying to individuals.
Among other simplifying statistical assumptions, including those of additivity and linear effects, we assumed that each question in the survey was equally important to residents in generating the composite measure used as the dependent variable in our analysis. Weighting each question equally would seem to be a reasonable assumption to make in the absence of evidence regarding resident preferences, and the main findings regarding nursing home size and availability of activities were robust to a range of sensitivity analyses and alternate survey question weights.
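The equal-weighting assumption described above, and the idea of checking robustness under alternate weights, can be sketched with a small simulation. The scores, the number of survey items, and the use of Dirichlet-sampled alternative weights are all invented for illustration; they are not the study's actual survey data or sensitivity procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_homes, n_questions = 200, 8
# Synthetic per-question satisfaction scores (e.g., % satisfied) for each home
scores = rng.uniform(50, 100, size=(n_homes, n_questions))

# The study's assumption: composite = equal-weighted mean of all questions
equal = scores.mean(axis=1)

# Sensitivity check: recompute the composite under random alternative
# weights (summing to 1) and compare it to the equal-weight version
w = rng.dirichlet(np.ones(n_questions))
alt = scores @ w
corr = np.corrcoef(equal, alt)[0, 1]
print(f"Correlation between equal- and random-weight composites: {corr:.2f}")
```

A high correlation across many such weight draws would indicate that conclusions based on the composite are insensitive to the particular weighting chosen, which is the spirit of the robustness claim made in the text.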
It was common for the satisfaction surveys to be completed with the assistance of third parties, which could potentially influence reported outcomes; while the rate of missing data was too high to include this variable in the formal analysis, a subgroup analysis of homes reporting data on this variable may be found in Additional file 1, pp. 21-22. Based on our findings, we do not expect this factor to be a threat to the validity of our results. We also analyzed the associations present within the user survey data between NH-level response rates and the quality measurements reported in the study. We identified a positive correlation between response rates and satisfaction rates, as has been found in previous studies of this phenomenon [64,65]. We also identified effects suggesting that response rates were higher at smaller nursing homes, and at private nursing homes (see Additional file 1, p. 8). Previous studies have suggested that low response rates are likely to result in an over-estimation of satisfaction [64]. As such, bias resulting from the systematic differences in response rates would likely be in the direction of underestimating the association of size and private ownership with satisfaction.
---
Conclusions
Of the quality factors investigated, NH size had the most prominent association with satisfaction, followed by the availability of exercise and social activities. Processual measures relating to individualized care, such as participation in resident councils and the formulation of individualized action plans had a weak association with resident satisfaction, as did other structural factors such as staffing ratios and staff education. The results also suggested that privately managed NHs had a slightly higher level of resident satisfaction, though the effect was similarly weak and appeared only after adjusting for municipality-level covariates. The results in this study suggest that both structural and processual quality factors matter in determining resident satisfaction, with NH size and the availability of exercise and activities having the greatest impact.
---
Implications for policy and practice
While the findings in this study suggest a direct link between offering more activities and a higher rate of satisfaction, more research is needed to determine why residents appear more satisfied at smaller homes. It may be that the proximal causes of satisfaction at smaller NHs could be replicated at their larger counterparts, for instance by improving staff continuity and turnover. If so, this could be a cost-effective alternative to building smaller nursing homes. Qualitative studies using methods such as interviews and participant observation may be most appropriate to investigate such effects in more depth. Another policy implication is that activities for residents should be a priority in NH care, and in cases where NH care is contracted out, offering physical and social activities should be a requirement.
---
Availability of data and materials
All data used in this study are publicly available. The data and code used to generate these results are available on Mendeley Data at: https://doi.org/10.17632/y69zhgxym3.2
---
Supplementary information
Supplementary information accompanies this paper at https://doi.org/10.1186/s12913-019-4694-9.
Additional file 1. Analysis_notebook. This document provides additional details regarding the factor analysis undertaken to reduce the dimensionality of the data prior to regression analysis, additional details regarding the main analysis, and a number of post-hoc analyses undertaken to evaluate the sensitivity of the findings and to investigate interesting results suitable for pursuit in further research.
Additional file 2. Survey_questions. This document details the specific questions from the two NBHW surveys constituting the aggregate variables included as independent variables in the regression analysis reported in this manuscript.
Abbreviations
FAMM: Five Aspects Meal Model; IQR: Inter-Quartile Range; NBHW: National Board of Health and Welfare; NH: Nursing Home; OLS: Ordinary Least Squares regression; SEK: Swedish Krona

---
Authors' contributions
DS, UW and PB conceived of and designed the study. DS performed the analysis and drafted parts of the manuscript. YL performed data cleaning, record matching, and drafted parts of the manuscript. All authors provided substantial input and revisions, and approved the final manuscript.
---
Funding
The study was funded by the Swedish Research Council for Health, Working Life, and Welfare (FORTE), dnr 2014-05134. The funding body had no role in the design of the study or collection, analysis, and interpretation of data or in writing the manuscript. Open access funding provided by Uppsala University.
---
Ethics approval and consent to participate
This study was approved by the Uppsala regional ethics review board (dnr 2017-342). A waiver of informed consent was granted by the review board.
---
Consent for publication
Not applicable.
---
Competing interests
The authors declare that they have no competing interests.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
Severe Acute Malnutrition (SAM) is a serious public health problem in many low- and middle-income countries (LMICs). Therapeutic programs are often considered the most effective solution to this problem. However, multiple social and structural factors challenge the social inclusion, sustainability, and effectiveness of such programs. In this article, we aim to explore how poor and remote households face structural inequities and social exclusion in accessing nutrition-specific programs in Pakistan. The study specifically highlights significant reasons for the low coverage of the Community Management of Acute Malnutrition (CMAM) program in one of the most marginalized districts of south Punjab. Qualitative data were collected using in-depth interviews and FGDs with mothers and health and nutrition officials. The study reveals that mothers' access to the program is restricted by multiple structural, logistical, social, and behavioral causes. At the district level, certain populations are served, while illiterate and poor mothers with lower cultural capital from rural and remote areas are neglected. The lack of funding for nutrition causes the deprioritization of nutrition by the health bureaucracy. The subsequent work burden on Lady Health Workers (LHWs) and the lack of proper training of field staff impact the screening of SAM cases. Moreover, medical corruption in the distribution of therapeutic food, long distances, difficulties with travel or accommodation, the lack of social capital, and the stigmatization of mothers are other prominent difficulties. The study concludes that nutrition governance in Pakistan must address these critical challenges so that optimal therapeutic coverage can be achieved.

---
Introduction
Worldwide, nearly 20 million children suffer from SAM. Every year, half of global childhood deaths are attributable to malnutrition, and one-third of these deaths are caused by SAM alone [1]. South Asia and Sub-Saharan Africa show the highest rates of underweight and stunting [2,3]. Almost 78% of wasted children belong to the three South-Asian nations: Pakistan, Bangladesh, and India [4]. As every sixth person in Pakistan lives in poverty [5,6], the rate of child malnutrition in the country is higher than that of other South-Asian nations [7].
Previous evidence from around the globe shows that supplementary programs have reduced moderate and severe acute malnutrition in children [8-11]. Recently, governments across the world have adopted multisectoral strategies to address the problem of malnutrition. These strategies combine nutrition-specific and nutrition-sensitive indicators. However, evidence shows that the multisectoral strategy has been less successful in achieving the desired results [12,13].
In Pakistan, a nutrition-specific CMAM therapeutic program was set up in Southern Punjab's poverty-stricken and flood-affected districts to deal with SAM. Under the CMAM program, moderate as well as uncomplicated SAM cases are treated with Ready-to-Use Therapeutic Food (RUTF), whereas children with complicated SAM are first referred by LHWs to a nutrition Stabilization Center (SC). Once complicated SAM cases are stabilized using specialized therapeutic milk (F-75/F-100), they can move on to RUTF. The ingredients of RUTF depend on local acceptability, availability, and cost, but a standard RUTF is made up of milk powder, peanut butter, vegetable oil, vitamins, minerals, and sugar. The advantage of the product is its long shelf-life without refrigeration; its drawback is that it must be imported, as it is not produced locally.
A sufferer's experience is a social product shaped by structural violence, which may be defined as violence "built into the (social) structure and shows up as unequal power and consequently as unequal life chances" [14] (p. 171). Kawachi et al. [15] observed that unequal health hazards for individuals are the product of social, economic, cultural, and political processes in society because health outcomes are curtailed by the exploitative apparatuses of resource distribution, power, and social control. Structural violence is indirectly exercised by different parts of the social machinery of oppression and is apparently "nobody's fault" [16]. Similarly, Quesada, Hart, and Bourgois [17] (p. 339) defined structural vulnerability as "a product of class-based economic exploitation and cultural, gender/sexual, and racialized discrimination and processes of symbolic violence and subjectivity formation that have increasingly legitimized punitive neoliberal discourses of individual unworthiness".
State institutions and programs ignore certain individuals based on caste, gender, and class, thus subjecting them to indirect violence. This often results in the failure of development programs [18], and children and mothers with lower social and cultural capital bear the brunt of this structural violence [19]. As these development programs often fail to achieve their stipulated targets, a lasting impact of such programs is that the poor in the target population become indifferent to similar interventions in the future. They tend to deprioritize health and normalize disease and malnutrition [20] rather than seeking to benefit from government intervention, owing to their negative experience of these programs. The legacies of underdevelopment, stigma, and discrimination, along with insufficient public healthcare systems, lead to poorer health outcomes for rural poor and ethnically marginalized households.
State institutions, development, and poverty alleviation programs often ignore individuals belonging to poor, rural, and lower-caste groups [21]. Inequalities based on caste, gender, and class in South Asia have caused development programs to fail because they marginalize poorer and weaker members [18,19], resulting in maternal and child health disparities [19]. In South Asia, the poor often face difficulties becoming beneficiaries; therefore, evidence [22] suggests that area-, gender-, caste-, and class-based determinants of social exclusion must be considered when setting program objectives, client eligibility criteria, and the selection process. Social capital is required to access medical settings [23]. In addition, studies have shown that corruption within government medical settings in Pakistan and India exhibits many parallels [24-26].
This study presents the narratives of healthcare providers and of mothers of SAM children seeking treatment from the therapeutic program in the district of Rajanpur in Punjab province, Pakistan. This qualitative study contributes to the literature by describing barriers and resources in accessing nutrition-specific services. The study focuses on the issues of health sector corruption, structural inequalities, and the role of social capital. It adds to critical medical anthropology and the public health literature. It also investigates challenges and barriers to health and therapeutic coverage, why the government lacks interest in the implementation of the nutrition-specific program, and how the poor are generally excluded.
---
Materials and Methods
---
Data Collection
The qualitative data for this study were collected during fieldwork in the Rajanpur district of South Punjab from January to May 2017. This area was selected purposefully because it was flood-affected and poverty-stricken, and its female illiteracy and maternal-child malnutrition rates were the highest in the whole province. Development infrastructure such as healthcare facilities was also scarce, and rural poor women faced disparities.
This exploratory study was based on a purposive selection of key stakeholders involved in the CMAM program, including healthcare providers and mothers of malnourished children (Table 1). After reviewing the available literature using keywords such as social barriers and structural challenges to therapeutic coverage [10,27-29], a semi-structured interview guide was developed, which was pre-tested with a few respondents and updated whenever more information about the issue was revealed during fieldwork. Exploratory research, as a methodological approach, investigates research questions that have not previously been studied in depth; it is often qualitative and involves a limited number of respondents, but is in-depth in nature. Therefore, only the most relevant stakeholders were interviewed: healthcare providers first (supply side), because they could help introduce the other key stakeholders, i.e., mothers of malnourished children enrolled in the therapeutic program; these mothers (demand side) were then interviewed in the next phase. First, Key Informant Interviews (KIIs) with key officials of the District Health Authority were conducted face to face by the principal author, who has experience in public health nutrition and knowledge of medical anthropology. Secondly, a Focus Group Discussion (FGD) with LHWs was conducted in a healthcare facility by the two qualitative researchers (F.A. and S.Z.). A maximum of 10 participants were allowed to take part in the group discussion. Participants were asked about the major difficulties, barriers, and challenges that hampered therapeutic coverage at the district level. Finally, healthcare providers helped to identify and communicate with mothers of SAM children.
The mothers of malnourished children were identified by the Nutrition Assistants appointed at SCs and by LHWs involved in the CMAM program. To seek consent to take part in this research, 30 mothers were informed about the nature of the study; however, only 20 mothers consented. We deliberately chose interview locations where respondents felt safe and comfortable. Audio recorders were not used, owing to cultural sensitivity and participants' comfort. In-Depth Interviews (IDIs) followed a flexible format, lasting from one to two hours. All interviews were conducted face to face in the local language (Seraiki). The open-ended in-depth interviews continued until experiences and essences were repeated and information saturation was achieved, after 10 mothers (Table 1). The majority of the mothers of SAM children were either uneducated or had only a few years of schooling, along with minimal socio-cultural capital and a disadvantaged economic status (i.e., <USD 100/month).
---
Data Analysis
The researchers promptly translated all the qualitative data obtained from group discussions, semi-structured interviews, and field notes verbatim from the local language into English. We then reviewed all the raw data and labeled sentences and text segments with different colors and codes to identify common meanings. After this, we grouped similar codes to create broader categories. Next, we cross-verified the narratives and removed inconsistencies, vagueness, and discrepancies. Lastly, codes and categories were analyzed, and the different themes affecting therapeutic coverage were identified using inductive research methods. In total, seven prominent subthemes emerged from the exploratory qualitative data. In the end, all conspicuous challenges, barriers, and difficulties were assembled into five leading themes:
(1) politico-economic or financial, (2) administrative and planning, (3) logistical, (4) social or cultural capital, and (5) behavioral or interactive (see Figure 1).
---
Ethical Considerations
The ethical approval for this study was acquired from the Advanced Study and Research Board (AS&RB) of Quaid-e-Azam University Islamabad at its 307th meeting, held on 20 October 2016. The board approved the current qualitative and ethnographic research in the Department of Anthropology following the endorsement of the Dean of the Faculty of Social Sciences. In addition, the District Health Department of Rajanpur approved the study protocols and tools. All participants were thoroughly informed about the nature and purpose of this study before their formal consent to be part of this exploratory qualitative research was taken. As the majority of the mothers were illiterate, consent was taken orally, according to their wishes and comfort. After obtaining informed verbal consent from all study participants, we promised to ensure their anonymity, privacy, and confidentiality.
---
Results
Our overall qualitative findings revealed the emergence of multiple financial, administrative, logistical, and behavioral difficulties that challenged the CMAM therapeutic program for the treatment of severely malnourished children in the Southern Punjab region of Pakistan.
---
Financial Barriers
Health priorities at the micro-level are influenced by macro-level incentives. Funding for different national or provincial health or nutrition programs determines the focus of health staff.
---
Funding and Priorities of Health Bureaucracy
The national Polio Eradication Program, as the most favored program, was prioritized by the health bureaucracy. The health department devoted most of its energy to this program and deprioritized others.
"Although the nutrition program has been functional for many years, the staff isn't free to run this at the district level. The health office gives importance to their routine matters and does not let this kind of vertical program be implemented in full scale and strength". (Nutrition Official, KII)
---
Work Burden on LHWs
LHWs coordinate between the community and the health department; therefore, they were involved in almost every program, whether provincial or national. They frequently complained of extra work pressure and burden, particularly from the Polio eradication program. Their primary duty was to cover and coordinate with more than two thousand pregnant and lactating women in their assigned outreach areas. Over-involvement in other programs reduced their concentration on their original maternal and child health work and led the health department to deprioritize nutrition activities.
"LHWs are involved in other programs, especially Polio. After working three to five days in the Polio campaign, one LHW would not go into the field because she is already tired. Similarly, in Measles, LHW is fully engaged for 12 days and becomes so fatigued and rarely visits the field for some days, and demands rest. When the department asks working overly and extensively, how she can fill the high gaps created in the nutrition program. This pressure is regular; Polio and other activities are unfinishable". (LHW, FGD)

"Funding availability in the Polio eradication program was the leading cause of why the health department Punjab always engaged LHWs for only this at the stake of another important program because their funding was low or none. It was owing to this fact that LHWs always wandered for Polio drops and skipped nutritional screening and education". (LHW, FGD)
In Southern Punjab, several LHW posts were vacant according to reports of the district health information system. Of a total of 900 LHW posts, only 650 were filled, and population coverage in the district stood at 44%. LHWs were dissatisfied with their low salary packages and other allowances. Logistical and cultural hurdles, along with the extra workload, jointly restricted their will and motivation. On many occasions, they treated the duty as no more than a formality, simply because they could not refuse orders from the department, and they neglected regular visits to assigned households due to low salaries and poor economic incentives. Many LHWs were not well trained in the anthropometric measurement of mothers and children for screening purposes. In the least developed areas, LHWs were often not appointed or remained absent. Many LHWs nevertheless reported that their performance was perfect and tried to justify their role, always reporting that everything was going well. One official remarked:
"LHWs are called almost every week, sometimes for meetings, sometimes for training, or sometimes for another task. She has to maintain and carry multiple registers. I mean, it's a serious matter that needs to be seen and fixed. The patients from remote rural and tribal areas are missed; SAM cases are from remote areas, where there is a water problem, and access is limited. So cases mostly come from rural areas". (Health Official, KII)
---
Administrative and Planning Failures
The training of field staff, screening, referral of SAM cases, and distribution of therapeutic food are compromised due to the weak administration of the program.
---
Improper Utilization of Nutrition Field Staff: Lack of Training
In 2008, the Government of Punjab recruited Health and Nutrition Supervisors at the BHU level to screen the community and train it on common diseases and nutritional issues. However, many remote BHUs lacked them, as there was no infrastructure. Since their creation, they have barely taken part in any significant nutrition intervention in the district. Their role in CMAM was never acknowledged until recently, when the multisectoral nutrition center (MSNC) at the provincial level anticipated their future participation in a 2017 report for the province of Punjab. They were not fully trained on nutritional issues and hence lacked relevant knowledge about the causes and treatment of malnutrition. It was reported that an international organization (Micronutrient Initiatives (MI)) had trained them on the importance of the micronutrient iodine for mothers and children. These supervisors were therefore mostly assigned monitoring duties for the Polio, EPI, and dengue prevention programs instead of nutrition. They were properly trained on malnutrition for the first time only in 2017, nine years after recruitment, which reflects a lack of coordination, the absence of a precise job description, a weak commitment to combatting malnutrition, a lack of vision, and related policy failures. Staff appointed at remote health units rarely performed their duties because of insecure environments, a lack of monitoring mechanisms, dilapidated hospital buildings, damaged roads, and a lack of transport facilities, yet these isolated areas are where attention is needed most. Most recently, new district coordinators were recruited by the multisectoral nutrition center, and they too need training on nutrition issues.
"There are gaps . . . .as the district coordinator of the malnutrition addressing committee has only one or two meetings with the Deputy Commissioner of the district. Also, MSNC established by the Planning and Development Commission of Punjab province has recruited district coordinators, but they are new and have no significant work to do. Nutrition supervisors are also not so trained and involved, nor can they help measure and refer malnutrition cases, but their involvement is limited to the polio program. Although all these have been appointed, they have no work to do, except work on special weeks. Recently, we called nutrition supervisors on nutrition week. They were assigned to distribute multi-nutrient sachets in their schools as area in-charges, but they are not really in much coordination". (Health Official, KII)
---
Weak Referral, Indifference, and Interpersonal Conflicts among Staff
Intrahospital or staff interpersonal politics at the Basic Health Units (BHUs) level emerged as one of the most significant reasons behind the weak referral of SAM cases to the SC at District Headquarters Hospital (DHQ). It was remarked that:
"The cases which reach DHQ without a referral are admitted right away, but SAM referral is constrained and slow, especially, people from remote rural areas are in great need because of the weak and poor referral system to the Stabilization Centre. Every month LAMA (who quit treatment) cases are increasing; 4-5 SAM cases are admitted daily, totaling approximately 120-150 in one month. Most of these cases are located at the basic health unit (BHU) level. For the treatment of SAM, it is very difficult to screen a child with a complication from the field by these LHWs through Mid Upper Arm Circumference (MUAC). LHW refers these SAM cases to Lady Health Visitor (LHV) who has to verify MUAC and complications, and forward complicated SAM cases to DHQ by an "1134 ambulance service". (Nutrition Official, KII)

After the anthropometric screening, LHWs generally referred malnourished mothers and children to BHUs and Rural Health Centers. However, many mothers were kept waiting unnecessarily by the Lady Health Supervisors (LHSs) appointed at BHUs. Many poor and illiterate mothers left the health units because they felt ignored, unattended, and devalued by these LHSs.
"LHW and LHV are often at odds with each other. Sometimes LHS dislikes an LHW, who insists on checking children immediately. Every LHW expects that she has hardly convinced and referred parents of SAM case to BHU, so now LHS should give priority so that it could be further referred to Stabilization Centre at DHQ. LHS asks LHW to 'wait outside' and does not attend to the case even after hours. This is how SAM cases leave hope for treatment and run away, and this is why referral of severely malnourished children with complications is minimum. However, a child specialist and nutrition staff, specified for this work only, are readily available at SC; therefore, SAM cases are measured and admitted without trouble. However, people from only nearby areas can reach directly to SC, but cases from remote areas have to be ignored". (Nutrition official, KII)
---
Lack of Monitoring and Medical Corruption
The presence of formula milk companies inside hospitals and the sale of not-for-sale RUTF were two significant signs of weak monitoring. Although such marketing is banned in principle, representatives of multinational formula milk companies were reported to move freely in the SC, BHUs, and RHCs, advertising and selling formula milk to the poor parents of children with severe acute malnutrition.
"Soon after recovering from complicated SAM, mothers were motivated to try their products. The company trains its agent to remain alert and keep an eye on every person monitoring and conducting research. They are well trained in rapport building with medical staff and patients' attendants for convincing them to use their products after the advertisement. Nobody ever restricted such active advertisement and sale". (LHW, FGD)

Therapeutic food was also reported to be sold by some LHWs, even though these packets are not for sale. Community members informed us that Plumpy'Nut sachets were being sold in some places at PKR 20-30 per sachet by a few LHWs. A mother indicated:

"I requested our LHW to give some food but she refused. I threatened one such LHW who used to sell it by saying, 'give some sachets for my son, or else I would complain against you that you sell off the therapeutic food illegally.' Never were any actions taken against such complaints by the concerned authorities". (Mother of SAM child, IDI)

"The distribution of therapeutic food is not altogether transparent and fair. Health staff often prefer and prioritize their relatives and close ones first whenever the task of providing therapeutic food is given to them". (Mother, IDI)
---
Lack of Social and Cultural Capital among Poor Mothers
Relationships with those who control power, access to information, and interpersonal skills necessary to communicate are essential requirements for becoming beneficiaries of development programs.
---
Rural-Urban Disparities: Accessing Therapeutic Program
When asked whether the field staff visited their area or household and what the impacts of the therapeutic food were, most respondents agreed that the milk provided at the stabilization center and the RUTF had a good effect on the sick child. The majority of mothers enrolled in CMAM reported that their children were recovering gradually; in their opinion, the specialized medical milk (75/100) and RUTF had a positive impact on their severely malnourished children. When asked how they came to know about the treatment of severely malnourished children at stabilization centers or CMAM, most parents revealed that they had been referred by the medical community or told about the program by people from urban centers, who suggested visiting the nutrition Stabilization Centre at DHQ to obtain the special milk (75/100) for malnourished babies.
"Doctors, LHWs, and active community members helped to refer us to the CMAM program and SC, for therapeutic 75 milk for the severely malnourished baby". (Mother closer to the city area, IDI)

"LHWs visited our area and told us to bring milk from CMAM staff; vaccinators also visit and inform us about the program". (Mother from Peri-urban area, IDI)

"LHWs do not visit our area, but vaccinators do once a year so we sometimes bring our children to the hospital for immunization and sometimes not. People from the city informed us about this program; they suggested us to visit Stabilization Centre because milk [75/100] was being distributed there". (Mother from the remote village, IDI)

Only a few parents reached the SC without any referral. This shows how local forms of social capital and relationships helped mainly urban and peri-urban families become beneficiaries, while secluding the majority of the most deserving families: remote, rural, illiterate, and low-income households with little social capital.
"We, the females, are carrying this unfortunate child without any help from other family members. I am a mother, how can I leave him alone in this condition, only my heart knows how much disturbed I am. No one can realize the state of my heart; I cannot see my child suffer. I am in profound psychological distress. When will my child feel normal and healthy, I don't know. I have tried my best to make him healthy and nourished. We have wandered everywhere, here and there, to find if someone could suggest a better way. Recently a person from our neighborhood informed us about this program, I requested my mother to test this place [Stabilization Center] too". (Mother from the remote village, IDI)
---
Logistical Difficulties
Treatment of children with complicated SAM requires their mothers or other caregivers to stay at the SC for some days until the child is stabilized and can move to the simple RUTF stage. However, most mothers complained that they had to leave against medical advice due to logistical hurdles.
---
Geographic Seclusion: Difficulties in Traveling
The poorest of the poor mostly live in risky, remote, and underdeveloped areas, and geography is one of the central causes of inequities in health and nutrition. Distance emerged as a substantial barrier to the coverage of and access to health and nutrition programs, and logistical problems were among the most significant reasons for low access. Poor transportation, long travel times, damaged roads, and long distances to the site were the major determinants of low coverage. Women in these settings are also less empowered, with the lowest access to healthcare facilities, literacy, and employment opportunities. One mother stated, "We are tired and we still have to travel", and explained that they had reached the stabilization center only after much difficulty and running errands:
"The [Nutrition stabilization] center is very far from our village, and it took hours to get there. We had to catch several types of transport; the first motorcycle from our community to another town, then an auto-rickshaw to the main highway. After it, we had to catch a bus from the road to reach the district bus stand. From the bus stand to the hospital, we had to hire an auto again. After wandering here and there madly in the hospital building, we reached the stabilization center by asking for addresses with the help of so many people. We got tired when we arrived here, and we still have to travel, we'll have to go back home as it is not allowed to stay without permission". (Mother of complicated SAM Child, IDI)
---
Problems Related to Staying at the Stabilization Center
Many mothers insisted to the hospital staff that they wanted to treat their children with complicated SAM at home. SC staff objected to this idea because the condition of the severely malnourished children was unstable, and caregivers were required to stay until the children became stable at the center. The Punjab government previously claimed it would set up SCs at the sub-district level, but SC activities remained limited to the district level, and the program may end at any time. UNICEF in Pakistan has recently planned to study the bottlenecks in the CMAM program, which indicates that these programs are still under the control of UN agencies and that the government lacks ownership.
"Convincing parents about the treatment at SC is a very complex task. Mental preparation of family and parents is essential for this because a mother or someone from the family has to stay for at least four days. They have to prepare their basket or bag". (LHW, FGD)
Another strong reason for low therapeutic coverage was the loss of income if the mother and father stayed at the SC for the treatment of one severely malnourished child while neglecting the rest of their children; this made them reluctant to complete treatment. Therefore, most grandmothers had to stay at the SC instead. Mothers could not stay longer because no one could care for the rest of the children at home. During the crop season, poor rural mothers could rarely afford proper time for treatment and health-seeking. Some domestic servants also complained about working hours; as they could not escape their duties, they delayed check-ups and treatment of children with complicated SAM. If mothers had to stay, they had to bring all of their children along to the SC at the district headquarters hospital. Unaware of cross-infection risks, these children played in the hospital's wards, touched the floor with their hands, and ate food there without handwashing.
---
Behavioral Problems with Nutrition Staff
Another critical factor of low coverage of the therapeutic program in rural and Southern districts of Punjab province in Pakistan involves the elements of stigma, respect, and dignity.
---
Stigmatization of Patients and Attendants
Many poor parents felt stigmatized and complained of being ignored by the hospital and nutrition staff. Illiterate people with low socioeconomic status lacked the confidence to communicate with hospital staff and feared being insulted by the doctor and staff. The behavior of the staff was not supportive; staff members sometimes showed irritation at the poor's dirty clothes, and CMAM staff were often reported to have been rude to mothers of severely malnourished children.
Multiple times, mothers recounted taunts and offensive remarks that left them feeling ashamed and embarrassed. For example, on one occasion, a nutrition assistant at the SC said to a mother, "you are always here to get this milk". Once, a female nutrition staff member angrily threw the packets of formula milk 75 toward a mother and said, "hold this packet and get out". In another instance, when a poor mother brought her child to the stabilization center for the treatment of SAM, the on-duty staff responded, "take your dirty luggage from here; it smells stinky".
---
Not Being Attended
Complaints from low-income parents about not being attended to were very common. Mothers explained how the staff at the nutrition stabilization center were indifferent, careless, and rude.
"We would wait all day and night, but no person attended a little. The sick child used to cry all night as they would give our child nothing to eat and drink. We were worried when the doctor and staff would pay attention to our child. Leaving such treatment [of indifference and disgust] would be better than just wasting time [in wait] here". (Mother at SC at DHQ, IDI)

"My husband said to SC staff 'my child is hungry, and you pay no attention. I do not want to leave my sick child as hungry all night.' Nurses complained about my husband to the head doctor, who called him and insulted him. My husband got disheartened and finally decided to quit the treatment at this center". (Mother at SC at DHQ, IDI)
---
Discussion
This study explored mothers' interactions with the biomedical treatment and therapeutic system of the CMAM program and the nutrition stabilization center. It specifically examined how poor, illiterate, and rural women were often unable to navigate therapeutic coverage and the politics of institutions. These difficulties were perilous for many women, mainly from remote and secluded areas, who were illiterate and lacked the minimum cultural assets and social skills required to negotiate a complex and unfamiliar setting [29]. Women's communications with the health and nutrition staff illuminated how the administration reinforced health and nutrition inequities. Barriers related to geography, income, fears of maltreatment, and discrimination emerged as the most striking and significant for the rural poor struggling to receive therapeutic care through the public healthcare system [20,30]. Many families could not access the CMAM program, owing to multiple socio-cultural and logistical reasons [27,28]. The staff of development programs often secluded poor mothers and children due to multiple power dynamics [22,23,31].
Families who had links within local power circles (social and cultural capital) had better chances of coverage. To combat the problem of malnutrition, the government needs to change its priorities. At the primary level, the deprioritization of the nutrition program relative to the Polio eradication program resulted from heavy international funding for the latter, suggesting that the government must increase funds for nutrition [32]. Further, the burden and pressure on the LHWs must be curtailed by focusing their attention on maternal-child health and nutrition programs, and vacant LHW posts in remote areas ought to be urgently filled [33]. All these steps, however, also require road construction and infrastructure provision at basic health facilities. Human development infrastructure at the local level is likewise required in South Punjab, which has long faced regional and ethnic inequalities [34].
The poor rely on traditional treatment methods because of their low income. The stigmatization of the poor and their trust deficit in government departments, together with the prevalence of expensive and unregulated private clinicians, are strong indicators of low biomedical service utilization that demand urgent policy decisions. This study showed that the medical staff did not care much about the marginalized victims of social stigma and ignored the poor's feelings [35]. The future design and implementation of government programs must be made more socio-culturally sensitive. Plumpy'Nut is effective only in emergency contexts, not in chronically poor settings, and such programs also create a dependency of low-income states on the international companies that prepare such foods. In addition, therapeutic food is not available in usual and regular circumstances, even though people need it. The permanent solution, therefore, lies not in treating the individual body but in searching for a cure for the social body through politico-economic means of social justice and equity [36].
Evidence showed that nearly half of the population in several rural districts was not covered by LHWs, especially in the most remote and the poorest areas [37,38]. UNICEF [39] has highlighted that the neonatal mortality rate was reduced in low-caste groups where LHWs made weekly visits in rural Indian Punjab. Recent studies [40,41] similarly demonstrated that sufficient training, financial compensation, and close supervision of community health workers are imperative for the successful delivery of SAM treatment, along with an adequate quantity of ready-to-use therapeutic food.
Some respondents revealed that therapeutic food was being sold off by LHWs and that representatives of formula milk producers were free to move within hospital settings. There is evidence that formula milk companies ignore the laws and continue marketing their products inappropriately [42]. The literature from Pakistan and India shows that corruption within medical settings restricts government services [26,43]. Drawing upon the anthropology of the state along with the perspective of structural violence, Gupta found that funds hardly reach their anticipated beneficiaries but mostly reach people with political acquaintances, cultural capital, and financial influence [44]. Inaccurate systems of information based on statistics, conflict, and wide-scale corruption in Indian bureaucracy systematically isolate and ignore the poor. Similarly, examining the "government of paper" in Pakistan, Hull [45] analyzed how bureaucratic processes and the management of records crafted partnerships among people as the core apparatus and governing emblem of bureaucracy. For him, papers should be seen "as mediators that shape the significance of the linguistic signs inscribed on them" [45] (p. 13), which shows that postcolonial bureaucratic records are materialized under the colonial policy of keeping government and society isolated.
Many poor, illiterate, and rural mothers indicated that they faced rude behavior and stigma in medical settings, which potentially constrained their access to the CMAM program. In a study in the Kenyan context, analogous shame, stigma, and discomfort at health clinics related to malnutrition, along with fear of mistreatment at the hands of biomedical staff, were noted as the most significant barriers to treatment for childhood acute malnutrition [46]. Chary et al. [20] argued that childhood diseases are treated incompletely because of the perception that the child is not being attended to, linking the phenomenon of "not being attended" with healthcare inadequacies.
Our findings showed that mothers faced logistical difficulties. Evidence from Guatemala similarly showed that poor women suffered from running errands [27]. Other evidence showed that therapeutic programs in five African countries failed because of low awareness of the program, long distances, the handling of rejection at sites [29][30][31], and the centralization of the program [47]. Our findings corroborate that distant communities remained disadvantaged in coverage by therapeutic programs, particularly for the treatment of complicated SAM, because caregivers had to stay for many days at the therapeutic center [29], most often adjacent to the children's hospital. Evidence [48] from the adjacent Sindh province of Pakistan also showed that remote areas were less exposed to the therapeutic program; common barriers included low awareness of malnutrition and its services, children's disapproval of RUTF, long distances, and high opportunity costs. This study likewise found that remaining in the program until full recovery was difficult.
This article examines mothers' interactions while accessing the nutrition-specific CMAM program. In doing so, it proposes that a "politics of neglect" is at play in these programs, neglecting the social body and the poorer sections of society in the program's target areas. These interventions do not consider processes of power and exploitation and ignore complex and unequal social relations. The narratives showed how the poor often faced structural inequities and social exclusion due to a lack of social or cultural capital. Evidence showed that the poorest of the poor and low-caste families with the lowest social capital in Punjab were excluded from the cash transfer program (a nutrition-sensitive program) at the will of local political leaders [49]. Some of the literature found similar results: only people with access and links to local politicians could succeed in becoming beneficiaries of the income support program in Pakistan [50]. Families with lower socio-cultural capital suffered the most because of the lack of transparent and impartial social protection policies and social safety nets. The literature from other contexts on so-called bureaucratic hurdles has highlighted the misery of poor women facing structural inequalities and the indifference of bureaucracy toward people who have no relationships with influential notables [51]. The lack of social and cultural capital deprives the poor of their due rights, however deserving they may be. On the other hand, people with such capital were witnessed on several occasions becoming beneficiaries even when they were not deserving.
According to Bourdieu [52], cultural capital plays a vital role in drawing benefits from society. When guided to adopt specific procedures, illiterate mothers cannot remember the steps and the names of officers. Poverty eradication and development programs preferably target the better-off, ignoring many of the poorest of the poor, who have never been taken seriously by the bureaucratic structure of development programs. When resources are limited, competition is high; the humanitarian apparatus therefore has to be narrow in its scope, leaving many deserving and potential beneficiaries far behind [53]. In Pakistan, poverty is extensive, and the poorest are deprived because they lack links and relationships with people in power. Pakistan is not a place where resources are equitably distributed, nor is its population under control. The bureaucratic structure does not let the poor and weak enter its offices unless an officer, lawyer, politician, or other notable accompanies them. The poor often endure social and structural difficulties in the process of becoming beneficiaries, so knowledge about social exclusion is fundamental for advising on program objectives, clients' eligibility criteria, and the selection process [53].
In addition, CMAM is a short-term curative measure, suited especially to emergency contexts. Not aligned well with local socio-cultural realities, the short-term global technical solution in the form of RUTFs and CMAM was implemented "under neoliberal governments and facilitated an increasingly inequitable economy with minimal state involvement in an increasingly individualistic social environment" [54] (p. 16). The permanent, long-term, sustainable solution to maternal and child undernutrition, however, lies in women's socioeconomic emancipation and their health and nutrition literacy [55][56][57][58][59]. In addition, training medical staff in respectful care is imperative.
---
Conclusions
The CMAM program in Southern Pakistan encounters multiple social, economic, and structural obstacles. First, lower funding for nutrition compared with other programs reduces officials' interest in nutrition and draws LHWs into multiple other tasks, increasing their work burden and diverting their attention from maternal-child health and nutrition. In addition, corruption in food distribution and the unethical sale of RUTF by LHWs were reported, which require strict monitoring and fair dispensation. The normalization of social exclusion has roots in politico-economic and structural inequalities. The study offers the following recommendations: prioritize more funding for nutrition; properly train field staff; improve the screening and referral of SAM cases; provide traveling incentives to needy, illiterate, and rural mothers; devolve the child stabilization service to the micro (UC/BHU) level; ensure the fair distribution of RUTF by LHWs; and treat parents politely. Finally, vacant LHW posts in remote rural areas demand urgent allocation.
---
Data Availability Statement: Not applicable.
---
Acknowledgments:
The input, contribution, and support of all those who provided generous data for this study are acknowledged here.
---
Informed Consent Statement: All respondents were informed about the nature and purpose of the study before taking their formal oral consent. In addition, we strictly ensured the privacy, anonymity, and confidentiality of all study participants.
---
Conflicts of Interest:
The authors declare no conflict of interest.
The coronavirus disease imposes an unusual risk to the physical and mental health of healthcare workers and thereby to the functioning of healthcare systems during the crisis. This study investigates the clinical knowledge of healthcare workers about COVID-19, their ways of acquiring information, their emotional distress and risk perception, their adherence to preventive guidelines, their changed work situation due to the pandemic, and their perception of how the healthcare system has coped with the pandemic. It is based on a quantitative cross-sectional survey of 185 Swiss healthcare workers directly attending to patients during the pandemic, with 22% (n = 40) of them being assigned to COVID-19-infected patients. The participants answered between 16th June and 15th July 2020, shortly after the first wave of COVID-19 had been overcome and the national government had relaxed its preventive regulations to a great extent. The questionnaire incorporated parts of the "Standard questionnaire on risk perception of an infectious disease outbreak" (version 2015), which were adapted to the case of COVID-19. Clinical knowledge was lowest regarding the effectiveness of standard hygiene (p < 0.05). Knowledge of infectiousness, incubation time, and life-threatening disease progression was higher, however still significantly lower than regarding asymptomatic cases and transmission without physical contact (p < 0.001). 70% (95%-confidence interval: 64-77%) of the healthcare workers reported considerable emotional distress on at least one of the measured dimensions. They worried significantly more strongly about patients, elderly people, and family members, than about their own health (p < 0.001). Adherence to (not legally binding) preventive guidelines by the government displayed patterns such that not all guidelines were followed equally. Most of the participants were faced with a lack of protective materials, personnel, structures, processes, and contingency plans. An increase in stress level was the most prevalent among the diverse effects the pandemic had on their work situation. Better medical equipment (including drugs), better protection for their own mental and physical health, more (assigned) personnel, more comprehensive information about the symptoms of the disease, and a system of earlier warning were the primary lessons to be learned in view of upcoming waves of the pandemic.
---
INTRODUCTION
Several types of human coronaviruses with low pathogenicity had been studied before the severe acute respiratory syndrome (SARS) emerged in 2002 in China (Drosten et al., 2003; Ksiazek et al., 2003; Peiris et al., 2003). SARS spread to at least 29 countries in Asia, Europe, and North and South America, with a total of 8,098 infections and 774 SARS-related deaths reported (Kahn and McIntosh, 2005). The virus that causes the presently spreading human coronavirus disease, named COVID-19, was first noticed in Wuhan, China, in December 2019, and it resembles the prior SARS (Ali S. A. et al., 2020; Liu et al., 2020; Wu et al., 2020). The infected typically experience symptoms similar to those of a common flu, with an estimated 80% showing only mild symptoms (Hafeez et al., 2020). As of 22nd December 2020, 76,023,488 cases and 1,694,128 deaths have been reported due to COVID-19 worldwide (World Health Organization, 2020a). For Switzerland, there have been 402,264 cases and 5,981 COVID-19-related deaths reported to this date (World Health Organization, 2020b), compared to a resident population of 8.606 million (by the end of 2019; Federal Statistical Office, 2020). The first COVID-19 case in Switzerland was registered on 25th February 2020 (Scire et al., 2020). The first wave of the pandemic took place in late March and early April 2020. By 23rd March, the effective reproductive number (Re) had decreased below one (95% confidence interval below one), as depicted in Figure 1, and the first wave was overcome by late May 2020, in the sense that daily new cases had decreased to single digits (Our World in Data, 2020). Shortly thereafter, the survey was conducted from 16th June until 15th July 2020. The subsequent second wave has recently grown significantly more severe than the first wave, with a maximum 7-day average of 8,064 daily new cases reported on 2nd November 2020, which equals 94 daily cases per 100,000 inhabitants (Swiss Federal Institute ETH, 2020).
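The per-100,000 conversion and the 7-day average above are simple arithmetic; a minimal sketch of both steps (the daily case numbers below are dummy values chosen for illustration, not the actual Swiss series):

```python
# Sketch: trailing 7-day average of daily new cases, and conversion to a
# rate per 100,000 inhabitants. `daily_cases` is illustrative dummy data.

def rolling_average(series, window=7):
    """Trailing moving average; one value per full window."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

def per_100k(value, population):
    """Express an absolute count as a rate per 100,000 inhabitants."""
    return value * 100_000 / population

daily_cases = [7000, 7500, 7800, 8200, 8400, 8600, 8948]  # dummy week
avg = rolling_average(daily_cases)[-1]
print(round(avg))                            # mean of the 7 dummy values
print(round(per_100k(8064, 8_606_000)))      # the paper's conversion: ~94
```

With a peak 7-day average of 8,064 cases and a population of 8.606 million, the conversion reproduces the reported figure of roughly 94 daily cases per 100,000 inhabitants.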
The COVID-19 pandemic has induced a global crisis with unusual health-related and economic challenges. It has been claimed to have caused "a significant global shock" (Mishra, 2020) and has even been named "catastrophic" (Maliszewska et al., 2020). As a consequence, the psychological health of individuals and families has been greatly affected, particularly regarding issues such as stress, states of shock, fear, existential anxiety, and grief (Pawar, 2020). Switzerland is no exception. The first wave of the COVID-19 pandemic led to drastic measures by the Swiss federal government, including the mobilization of several thousand Swiss citizens through the militia system of the Swiss army (the greatest mobilization since World War II) (Federal Council, 2020a;Federal Office of Public Health, 2020). The most restrictive phase took place from 16th March until 26th April 2020, which has popularly been referred to in Swiss media as the "lockdown" (Abhari et al., 2020;Neue Zürcher Zeitung, 2020a). Registered unemployment increased from 121,018 to 153,413 people between January and April 2020 (+26.8%, State Secretariat for Economic Affairs, 2020a). After the precautionary measures had been gradually relaxed following 26th April, the Federal Council and the Federal Office of Public Health intensified the measures again in October 2020 in reaction to the second wave (Federal Office of Public Health, 2020). Several branches of the Swiss economy have been under considerable pressure (State Secretariat for Economic Affairs, 2020b), and prognoses for the near future remain unfavorable (State Secretariat for Economic Affairs, 2020c). By the end of November 2020, 153,270 people were registered as unemployed, amounting to an unemployment rate of 3.3% (State Secretariat for Economic Affairs, 2020a). 
Accordingly, the pressure on the economy is still high, as is the strain on the psychological health of the population, given this ongoing phase of restricted public and private life, economic uncertainty, health hazard, and loss.
Healthcare workers are a primary group on which the COVID-19 pandemic has imposed extraordinary challenges. This has clearly been recognized in the international literature. As first responders in providing care, they have been exposed to feelings of stress and uncertainty, while working long hours and often not being fully protected against an infection (Shaukat et al., 2020). The risk of testing positive for COVID-19 is high among healthcare workers (Nguyen et al., 2020), which, combined with the responsibility they bear for their patients, has exposed them to ethical dilemmas (Menon and Padhy, 2020). As private citizens, they have also had to cope with posing an increased infection risk to their social environment. Even being depicted as "heroes" by the media can in fact be counterproductive, as it increases their perceived pressure (Cox, 2020). This situation can significantly affect their mental health and even lead to work-related trauma (Probst et al., 2020; Vagni et al., 2020). Many healthcare workers have been documented to have developed mental issues for which they require psychological support (Lai et al., 2020). This is a clear indication that, beyond infrastructural considerations, the individual capacities of healthcare workers, including their psychological well-being, are a crucial ingredient in facing a pandemic of the magnitude of COVID-19.
Shortly before the first wave of COVID-19 in Switzerland, northern Italy, a direct neighbor, experienced a severe overload of the healthcare system due to COVID-19, particularly of hospitals and intensive care units (ICU). This provided an alarming example to Swiss healthcare workers. The International Council of Nurses (2020) documented both the high rate of infection among healthcare workers in northern Italy, who then needed to be isolated outside of the workforce for 14 days, as well as the physical and mental exhaustion of them and their colleagues who were still/again in service. In mid-October 2020, as the second wave of COVID-19 infections had already emerged, the Swiss Society of Emergency and Rescue Medicine, Switzerland Emergency Care, and the Swiss Association of Paramedics together issued an open call to the Swiss government for support. They stated that the health of Swiss healthcare workers, which had already deteriorated due to the first wave, was at considerable risk of getting worse, if the government did not apply consistent measures across the entire country (SwissInfo.ch, 2020a).
Beyond these challenges, the pandemic has exposed the vulnerability of people, healthcare workers among them, to receiving flawed information through popular media, which may affect their judgment. The conveyed information may be imprecise or even misleading, and it may originate within media outlets themselves or merely be transmitted by them. The notion of vast flows of information on a "hot topic" coming from all kinds of sources, where it may not always be clear to the reader or listener which are proven facts and which are opinions, is known as an infodemic (Lexico dictionary, 2020). Filtering information by assessing its source is therefore a necessity, particularly for healthcare workers.
With the physical and mental health of healthcare workers being at stake, insight on their perspective and identification of their crucial challenges, as they perceive them, are greatly needed. It is a first step towards sensibly protecting them for their own sake, as well as for them to remain effective and efficient in their services, during a time when they are most needed by society. A rapid and effective response, as well as healthcare staff that is still able to take leadership, are pivotal in successfully handling the pandemic (see e.g., Nagesh and Chakraborty, 2020). Lessons from the first wave of the pandemic are therefore needed, and first-hand empirical data is key. This study presents a quantitative survey of Swiss healthcare workers (n = 185) conducted shortly after the first wave of the pandemic. Its aim is to provide evidence of their clinical knowledge about COVID-19, their emotional reaction, their adherence to preventive guidelines, and the impact on their work situation. For such insight to be accurately drawn, understanding the context is essential. Therefore, the circumstances under which the first wave impacted the healthcare workers need to be considered, which to a large degree depend on how the government and the healthcare system were prepared for and reacted to the pandemic.
A few recent studies have provided quantitative evidence of the knowledge of healthcare workers on COVID-19. Wahed et al. (2020) have studied Egyptian healthcare workers, showing that knowledge was higher among the more highly educated individuals, as well as among those below the age of 30 years. Zhang et al. (2020), in their survey of Chinese healthcare workers, concluded that knowledge was sufficient in 89% of them. Honarvar et al. (2020) have provided evidence of the knowledge of the general public on certain COVID-19-related issues for the case of Iran. Similarly, Abdelhafiz et al. (2020) have assessed the knowledge of the Egyptian general population. To our knowledge, no study has been published so far specifically focusing on the clinical knowledge of Swiss healthcare workers and their media use. Our study therefore fills this gap in the literature.
Several studies in the international literature have given insight on personal protective equipment (Park, 2020), specific work risks for healthcare workers related to COVID-19 (Ali S. et al., 2020), and psychological coping mechanisms (see e.g., Muller et al., 2020;Probst et al., 2020;Teo et al., 2020;Vagni et al., 2020). Further studies have shed light on risk perception and attitudes towards COVID-19 (see e.g., Führer et al., 2020;Hager et al., 2020;Honarvar et al., 2020;Zegarra-Valdvia et al., 2020). However, when considering risk perception and attitudes, many of the available studies refer to the general population instead of healthcare workers in particular. Exceptions are given as follows. Spiller et al. (2020), who focused specifically on a sample of Swiss healthcare workers, found no substantial changes in anxiety or depression over the course of the COVID-19 pandemic. Aebischer et al. (2020), who surveyed 227 resident medical doctors and 550 medical students through snowball sampling in Switzerland, found that those medical students who were involved in the COVID-19 response (30%) displayed higher levels of emotional distress than their non-involved peers, and lower levels of burnout compared to the residents. Dratva et al. (2020) analyzed Generalized Anxiety Disorder Scale-7 (GAD-7) in a sample of 2,429 Swiss university students, 595 of which (25%) were students of health professions. They found three classes of individuals regarding the perceived impact of the COVID-19 pandemic, with large differences in the odds of increased anxiety. They concluded that preventive/containment measures against COVID-19 had a selective effect on anxiety in students. However, these analyses were not differentiated across professions/fields, and therefore no results specific to healthcare workers or students of health professions were available. Puci et al. (2020) showed that the risk perception of getting infected with COVID-19 was high among Italian healthcare workers. 
They also reported sleep disturbances in 64% of the participants, and that 84% perceived a need for psychological support. Abolfotouh et al. (2020), in their survey of Saudi Arabian healthcare workers, found that three in four respondents felt at risk of contracting COVID-19 at work, and that 28% did not feel safe at work given the available precautionary measures. Predictors of high concern were, among others, younger age, undergraduate education, and direct contact with patients. In a study of Ethiopian healthcare workers (Girma et al., 2020), risk perception due to the pandemic was measured by ten items on a five-point Likert scale. The mean score of perceived vulnerability was higher for COVID-19 than for the human immunodeficiency virus, the common cold, malaria, and tuberculosis. Wahed et al. (2020) studied a sample of Egyptian healthcare workers, finding that 83% were afraid of being infected with COVID-19. Therein, a lack of protective equipment, fear of transmitting the disease to their families, and social stigma were the most often named reasons. Two further studies are currently in their preprint phase: Firstly, Weilenmann et al. (2020) investigated mental health (depression, anxiety, and burnout) in physicians and nurses from Switzerland, considering work characteristics and demographics as explanatory factors. They concluded that support by the employer, as perceived by the physicians and nurses, was an important indicator of anxiety and burnout, while COVID-19 exposure was not strongly related to mental health. Secondly, Uccella et al. (2020) identified specific risk factors/groups among workers of public hospitals in Italy and Switzerland regarding psychological distress, such as being female and working in intensive care. Having both children and stress symptoms was associated with a perceived need for psychological support.
Accordingly, while several studies are available regarding specific measures of psychological deterioration, such as anxiety or depression, and also regarding risk perception, quantitative evidence for the specific case of healthcare workers in Switzerland is still rare. Furthermore, the mentioned studies of risk perception referred to the situation at the time of the respective surveys during the pandemic, meaning that the available preventive measures and policies varied substantially. By contrast, the participants of our study were instructed to quantify the risk of COVID-19 independently of the specific precautionary measures that were in place at the time. That is, they answered for the scenario in which no other precautionary measures were taken during the first pandemic wave, other than the usual measures against common influenza. Albeit hypothetical, this allowed for a more general assessment of the threat imposed by COVID-19, making it more comparable to other health hazards.
The precautionary health behavior practices of Ethiopian healthcare workers were assessed by Girma et al. (2020) with a ten-item questionnaire. The items covered dimensions such as the frequency of wearing gloves or wearing a mask. Zhang et al. (2020) surveyed the implementation of four mandatory practices in hospitals among Chinese healthcare workers, concluding that 90% followed them correctly. Our survey contributes to the literature by using a different set of guidelines, which were legally non-binding and issued by the national government towards the general population. Thereby, the study covers the adherence of healthcare workers also in their private life, and is specific to the case of Switzerland.
Several studies have recently examined the responses to the COVID-19 pandemic in different countries. They adopted different perspectives, analyzing the effectiveness of governmental policies (Dergiades et al., 2020; Desson et al., 2020), epidemiological responses (Jefferies et al., 2020), testing, contact tracing and isolation (Salathe et al., 2020), lockdown policy (Faber et al., 2020), preparation of the healthcare sector (Barro et al., 2020), as well as key learned lessons (Han et al., 2020). However, empirical studies of how such measures are perceived by the healthcare staff, and of how the pandemic has affected their work situation from their own perspective, are still scarce. Spiller et al. (2020) compared two demographically matched samples of healthcare workers, which were collected at two different points in time: at the height of the pandemic (T1) versus two weeks after the healthcare system had started its transition back to usual operations (T2). They found that working hours were higher at T1 compared to T2, and still higher at T2 compared to pre-pandemic levels. Uccella et al. (2020) found that healthcare staff working in intensive care experienced an increase in working hours. The study by Wolf et al. (2020) investigated the effect of policies such as the Swiss "lockdown" on dental practices and social issues such as unemployment and practice closures, assuming a more economic perspective. Abolfotouh et al. (2020) found broad approval among healthcare workers of the following: the suggestion that the national government in Saudi Arabia should mandate the isolation of COVID-19 patients in specialized hospitals, travel restrictions within the country, and curfew. Our study contributes by providing evidence of how the work situation of healthcare workers had been impacted from their own perspective, and of how they perceived the measures that were implemented by the government.
This study provides insight on several psycho-social factors that in combination are relevant to the role of healthcare workers in the current pandemic. They are not specific psychological diagnoses or concepts of psychological deterioration like depression, anxiety, or burnout, but concern a broader spectrum of issues relevant to the mental wellbeing and the capability to act of healthcare workers. This supports policymakers in pragmatically fostering their comprehensive view of the situation, and in designing policies to sustainably protect the wellbeing of healthcare workers. In addition, the healthcare workers named the specific lessons that needed to be learned from their perspective when facing further pandemic waves.
---
MATERIALS AND METHODS
---
Study Setting
This cross-sectional survey was conducted from 16th June to 15th July 2020 with Swiss healthcare workers who regularly worked in direct contact with patients. The healthcare workers were also pursuing a professional development course at Careum Weiterbildung or had attended such a course within recent years. Careum Weiterbildung, situated in Aarau, is one out of several institutions in Switzerland offering extra-occupational courses of professional development (/vocational training) to healthcare workers. These courses vary in duration from 1 day to several days per month over several years and cover a broad range of practice-oriented topics and specializations within healthcare and social sciences. They are often multidisciplinary, and they are aimed at improving care by teaching methods of caregiving, knowledge of practical procedures, communication and organizational skills. Attending such professional development courses is highly common among healthcare workers of all specializations and hierarchical positions in the Swiss healthcare system. Participation was strictly voluntary and anonymous. According to Swiss regulations, no approval by an ethics committee was required for this study.
The participants were surveyed under the following circumstances: After the final day of the above-mentioned "lockdown" during the first wave in Switzerland on 26th April 2020 (see section "Introduction"), the preventive measures had been gradually eased by the national government (Neue Zürcher Zeitung, 2020b; Schweizer Radio und Fernsehen, 2020). From 27th April, businesses offering personal services with physical contact, such as hairdressers, beauty shops, and others, had been allowed to reopen, as well as florists and hardware stores (Federal Council, 2020b). From 11th May, primary and lower secondary schools had resumed, and restaurants, markets (not only for food), museums, and libraries had been allowed to re-open, along with sports events without physical contact (Federal Council, 2020c). From 28th May, religious events with larger groups of people could be held again (with a protection concept for the participants) (Federal Council, 2020d). From 6th June, private and public events with up to 300 people had been re-allowed, and touristic facilities (such as mountain railways, camping sites, etc.) could re-open. On 15th June, the borders with many countries within the EU/EFTA had been completely re-opened (SwissInfo.ch, 2020b). With the survey starting on 16th June, the participants answered the questionnaire after the first wave of COVID-19 had been overcome, and shortly after the government had relaxed preventive measures to a great extent.
---
Participants
All healthcare workers who were part of this study (n = 185) were directly attending to patients, with 22% (n = 40) of them either working with COVID-19 patients at the time of the survey or being scheduled to work with COVID-19 patients within the following 6 months. One in six individuals (17%, n = 31) indicated that because of their health condition, they themselves belonged to a risk group regarding COVID-19. The majority worked in a leading position (56%, n = 104) and roughly one in six had a technical lead position (18%, n = 33). They came from all major areas of the healthcare system, with 22% (n = 40) working in acute care (including psychiatric care), 54% (n = 100) in nursing homes, 16% (n = 30) in home care, and 12% (n = 22) in other areas such as rehabilitation and patient counseling. The median age was 49 years, while the minimum was 23, and the maximum was 68. The vast majority were women (89%, n = 164). For further characteristics of the sample, see Table 1.
---
Data Collection
The data were collected by two-stage cluster sampling, inviting all current and recent attendees (past 8 years) of Careum Weiterbildung for voluntary participation in the survey.
A standardized online questionnaire was delivered to 1,747 attendees' addresses on 16th June via e-mail. 38.1% (n = 665) of the delivered messages were opened, and for 36.4% (n = 242) thereof the link to the survey was followed, as controlled by Mailworx software. A reminder was delivered to 1,684 attendees' addresses on 30th June, which was opened in 32.9% (n = 554) of the cases, and for 29.1% (n = 161) thereof the link to the survey was followed. A total of 194 participants completed the questionnaire, 185 of which directly attended to patients and therefore belonged to the population of main interest. Completion took 18.1 min at the median (minimum 9.3; maximum 54.6). The questions were posed with given answer options, predominantly in multiple-answer form, and some in multiple-choice form (as the only exception, the participants entered their age as an integer). Thereby, parts of the "Standard questionnaire on risk perception of an infectious disease outbreak" by the Municipal Public Health Service Rotterdam-Rijnmond and the National Institute for Public Health and the Environment (Voeten, 2015) were adapted to the case of the COVID-19 pandemic. The answer option "other" was frequently included, which, if selected, led to a request for text input for specification by the participant. Questions were posed across the different parts of the questionnaire as follows. (1) Knowledge about COVID-19: The participants were presented with eight claims about COVID-19 as stated in Table 2 (labeled as items K1-K8). They were asked to choose for each claim whether it was correct, incorrect, or unknown to them (options "right"/"wrong"/"don't know"). The correct answers shown in Table 2 ("true" or "false" in parenthesis) were taken from the following sources: Day (2020) (K1); Mullard (2020) (K2); Morawska and Cao (2020) [...] 2) those on which they needed more detailed information than they had at the time (for the precise wording of the question, see Table 2).
(2) Sources of information and means of communication: A first multiple-answer question on who should provide them with the necessary information on COVID-19 (seven answer options, S1-S7), as well as a second multiple-answer question on how they preferred to receive this information (ten answer options, M1-M10), measured their preferred media use (see Table 3 for the precise wording). Furthermore, the participants rated their use of each of five given types of media (U1-U5) on a six-point Likert scale ranging from "daily" to "never" (see Table 4 for the precise wording). (3) Emotional distress and risk perception: The first question was "how worried do you feel because of the possibility of [the respective scenario]?" The three scenarios of "getting COVID-19 yourself," "family/friends getting COVID-19," and "numerous cases of death among elderly and sick people due to COVID-19" were each rated on a four-point Likert scale ranging from "very worried" to "not worried at all," as listed in graph A of Figure 2. For the questions on risk perception, a hypothetical scenario was introduced by the wording "please answer for the scenario in which no extraordinary measures were undertaken in Switzerland other than the usual measures against influenza (i.e., no prohibition of social gatherings/events, no lockdown, no extraordinary measures in hospitals)." For this scenario, the question "would COVID-19 be a threat to..." was asked in the five specific respects of "...your own life?", "...the life of your family members or friends?", "...health professionals attending to COVID-19 patients?", "...the Swiss population?", and "...the global population?". The answers were given on a four-point Likert scale ranging from "very serious threat" to "no threat at all," as listed in graph B of Figure 2. As a follow-up, the identical questions were asked a second time, with the answers on a discrete rating scale as described by Studer and Winkelmann (2017).
The discrete rating scale ranged from zero to ten, and only the extremes were verbally labeled ("0 = no threat at all;" "10 = very serious threat"). This allowed for the application of different methods of analysis, as described in the section "Data Analysis." (4) Perception of and adherence to preventive guidelines: The participants rated the likelihood of a second wave of COVID-19 in Switzerland before the end of 2020 on a six-point Likert scale ranging from "certainly" to "certainly not." They also rated the likelihood of a different pathogen causing another pandemic of equivalent or greater magnitude within the upcoming 20 years on the same scale.
Table 5 lists the precise wording of the question and the answer options. Note that for the intermediate levels of the Likert scale, the resulting frequencies are presented in cumulative form, as described in the section "Results." In the questionnaire, the Likert scale was included in typical fashion without cumulative meaning (i.e., no "≥" or "≤" signs). (Notes to Table 5: the six answer options were "certainly," "very likely," "rather likely," "rather unlikely," "very unlikely," and "certainly not." "≥Rather likely" encompasses all individuals who answered "rather likely," "very likely," or "certainly;" "≤rather unlikely" encompasses all individuals who answered "rather unlikely," "very unlikely," or "certainly not." "CI" stands for Wilson's confidence interval.) The participants repeated the assessment of the same two questions, but this second time with the answer options being on a discrete rating scale ranging from zero to ten with only the extremes having a verbal label ("0 = certainly not;" "10 = certainly"). They were then shown six preventive guidelines (A1 and A3-A7 in Table 6). These guidelines were in place in Switzerland during the "lockdown" phase (with A3 and A4 formulated slightly less strictly/clearly), and some of them were relaxed afterwards. However, they had the status of recommendations by the federal government, not of legally binding rules. The participants indicated how strictly they followed them on a six-point Likert scale ranging from "always" to "never." The precise wording is given in Table 6. Like in Table 5, while the resulting frequencies for the intermediate levels are presented in their cumulative form, this was not the case in the questionnaire, where the ordinary Likert scale was used (without "≥" or "≤" signs). The participants were
further asked to indicate how strictly they expected to follow the same guidelines in the future, as listed in the lower part of Table 6 (A11 and A13-A17). There, the six-point Likert scale ranged from "presumedly forever" to "0 to 1 month," and the alternative option of "don't know" was added. To evaluate these guidelines, the participants were asked "which of the following claims apply to the above-mentioned guidelines?" referring to guidelines A1 and A3 through A7. They were presented with the multiple answer options "most of them are exaggerated for persons not working with patients or elderly people," "most of them are exaggerated for persons working with patients or elderly people," "most of them are ineffective," and "none of the answers above apply." Finally, the participants indicated whether they currently had any plans of traveling abroad for private reasons before the end of the year 2020 (multiple-choice options "yes"/"no"/"undetermined yet"), and whether they would have had such plans if the COVID-19 pandemic had not occurred (see the precise wording in Figure 3). (5) Impact on work situation: For each of four claims regarding preparation (P1-P4 as shown in Table 7) it was asked whether the claim was true or not. By item P5 the choice was offered that none of the claims P1 through P4 were true, which, if chosen, implied that P1 through P4 could not be selected. The question "how has/had COVID-19 affected your work situation?" was then asked with eleven answer options (W1-W11 as listed in Table 7) of which the last option excluded all other ten. (6) Reaction by the government: The sentence "the measures implemented by the government between 17th March and 26th April ("lockdown") were..." could be completed with either "...exaggerated," "...adequate," or "...not strict enough / too late / too short in duration."
The follow-up question was "which of the following claims applies to the gradual steps of relaxation of these measures, which are in place since 27th April and which are planned for the future?". The multiple-choice answer options were "the measures should have been relaxed earlier / more strongly," "the relaxation plan is adequate," and "the measures should have been relaxed later / less strongly." (7) Key lessons: The question "which lessons need to be learned and what should be different in case another pandemic should happen in the future?" was asked with ten answer options (L1-L10 as listed in Table 7), of which the last one excluded all other options. (8) Presumed cause of the pandemic: The participants were presented with a multiple-choice question phrased as shown in Figure 4. At the end of the questionnaire, the participants could enter any comments, regardless of their previous answers.
---
Data Analysis
Confidence intervals (CIs) of proportions, as shown in Tables 2 through 7 and as referred to in the text of the "Results" section, were calculated by Wilson's method (for a comparison of methods, see Newcombe, 1998). Fisher's exact test was used for testing the equality of proportions (see section "Emotional Distress and Risk Perception"). Pair-wise rank correlations were calculated by Spearman's method (see Table 8) and classified according to Cohen (1992). For any test of hypotheses, whether univariate or within a multiple regression model, a type-one error probability (p) < 0.05 was considered statistically significant.
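The Wilson score interval used throughout the tables has a simple closed form and can be reproduced independently. The sketch below is an illustrative stand-alone implementation, not the authors' code; the count of 66 out of n = 185 is an assumed example chosen to be consistent with the 36% (CI 29-43%) reported for item K4 in the "Results" section.

```python
import math

def wilson_ci(count, nobs, z=1.959964):
    """Wilson score interval for a binomial proportion (cf. Newcombe, 1998).

    z defaults to the 97.5% normal quantile, giving a 95% CI.
    """
    p = count / nobs
    denom = 1 + z**2 / nobs
    center = (p + z**2 / (2 * nobs)) / denom
    half = z * math.sqrt(p * (1 - p) / nobs + z**2 / (4 * nobs**2)) / denom
    return center - half, center + half

# Assumed example: 66 of 185 participants (about 36%) -> CI of roughly 29-43%
lo, hi = wilson_ci(66, 185)
```

Unlike the naive Wald interval, the Wilson interval never extends below 0 or above 1 and behaves well for proportions near the extremes, which is why it is preferred for survey percentages like those reported here.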
In the same regard, alternative hypotheses were two-sided. By binary logistic regression, the effects of multiple predictors on a binary outcome were modeled. The results were computed as average marginal effects (AME), representing percentage-point differences in the probability of the outcome being positive. By fractional logistic rating scale regression, the effects of multiple predictors on an outcome on an eleven-point discrete numeric rating scale (0-10, with labeled extremes) were modeled. The results were likewise represented as AME, here representing differences on the 0-10 scale. For an explanation of this method, see e.g., Studer and Winkelmann (2017). Each regression model was optimized such that systematic factor elimination minimized Bayes' information criterion (BIC). The initial set of predictors for which factor elimination was performed comprised items for which one-sided causality could be assumed, including W2 through W5 (see Table 7), being part of a COVID-19 risk group, and having answered the questionnaire before 20th June 2020 (see the final paragraph of the "Data Analysis" section for explanation). In some cases, minimization of BIC led to a reduction of the model to a single predictor, as reported in the "Results" section. The following models were estimated for the different parts of the questionnaire.
(1) Knowledge about COVID-19: A binary logistic model of item K4 (Table 2) being answered correctly (versus wrongly or by the answer option "don't know"). (3) Emotional distress and risk perception: Three binary logistic models, one for each of the three dimensions depicted in graph A in Figure 2, of the respective outcome being at least "worried" (i.e., ("worried" or "very worried") versus ("a little worried" or "not worried at all")). A fractional logistic model of the perceived threat to one's own life on the 0-10 discrete rating scale, as well as another fractional logistic model of the perceived threat to the life of family members and friends on the same scale. (4) Perception of and adherence to preventive guidelines: Three binary logistic models, one each for the items A1, A3, and A4 (Table 6), of the respective outcome being at least "almost always" (i.e., "almost always" or "always" versus all other answer options). Three binary logistic models, one each for the items A13, A14, and A15 (Table 6), conducted for those participants who claimed to adhere to the respective guideline at least "predominantly" at the time of the survey (as measured by items A3, A4, and A5).
Thereby, the probability of continuing the individual level of adherence at least until a vaccine would be available was modeled (i.e., "until vaccine available" or "presumedly forever" versus all other answer options, except for "don't know," in which case the respective individual was excluded). A binary logistic model of currently having plans of traveling abroad before the end of 2020 given the pandemic, as described in Figure 3 (i.e., "yes" versus the other two answer options). (6) Reaction by the government: A binary logistic model of the question "which of the following claims applies to the gradual steps of relaxation of these measures, which are in place since 27th April and which are planned for the future?" being answered by "the measures should have been relaxed later / less strongly" (versus the other two answer options). For each of these BIC-optimized models, all of the predictors and their estimated effects are reported in the "Results" section.
One of the tested predictors in the above-mentioned models concerned a specific public announcement by the Swiss Federal Council, which requires specific explanation. It was made shortly after the start of the survey: During the day of 19th June 2020, the Federal Council announced that most of the national preventive measures in place at that time would be abolished or relaxed on June 22nd. In particular, organized events with up to 1,000 people would be legalized again, the recommended physical distance between people would be reduced from 2 to 1.5 meters, masks would not be mandatory in public transportation (yet recommended), and home office would no longer be a recommendation (Federal Council, 2020e). The Federal Council further announced that the handling of a potential second wave would be the duty of the Swiss cantons, which are the member states of the Swiss Federation. It thereby undertook a fundamental change of policy, which it underlined by suspending the national coronavirus task force (KSBC). Notably, these steps were not known to the broad public before 19th June. Hence, the government's future plans changed on the 19th of June to being significantly more liberal than before, as far as public knowledge is concerned. From 16th June until 19th June, 107 of the total of 185 participants had already answered the survey. Naturally, by the time the survey had started on 16th June, no question specifically referring to the announcement of 19th June could have been included in the questionnaire. For reasons of consistency, the questionnaire was not altered after the start. Therefore, the day of participation in the survey (i.e., whether it was after 19th June or not) was used as a predictor of the answer to whether the participants agreed with the steps of relaxation "undertaken since 27th April and planned for the future" (see section "Reaction by the Government").
---
RESULTS
---
Knowledge About COVID-19
Knowledge was high regarding the unavailability of a COVID-19 vaccine (item K2), the ineffectiveness of influenza vaccines against COVID-19 (K8), the occurrence of symptoms (K1), and transmission without physical contact (K3), with over 92% (confidence intervals (CIs) over 87%) answering correctly (see Table 2). 76% of the participants answered correctly that COVID-19 was more infectious (K5) and 72% that it had a longer incubation time (K6) than common influenza. 69% correctly indicated that COVID-19 cases more often had a life-threatening disease progression than common influenza (K7). However, 36% (CI 29-43%) falsely believed that if hygiene standards such as frequent washing of hands and sneezing only into tissues were met, an infection with COVID-19 would be virtually impossible. Another 7% (CI 4-12%) answered that they did not know the answer to this question. Hence, knowledge on the latter item (K4) was significantly lower than on any other tested item. It was even lower among participants who, as a result of the pandemic, worked more hours than usual (AME = -17.7 percentage points, p < 0.05, binary logistic regression).
Additional information on treatment was most frequently desired (43%, I7 in Table 2), followed by incubation time (34%, I2), severe disease progression (29%, I6), infectiousness (27%, I5), transmission between people (15%, I1), preventive measures (13%, I4), and symptoms (11%, I3). 28% (CI 22-35%) claimed not to be needing any further information on COVID-19-related topics (i.e., none of the items I1 through I8 were selected).
Even though knowledge was comparably low regarding the effectiveness of standard hygiene (K4), the topics of preventive measures (I4) and transmission (I1) were rarely named as topics for which further information was perceived to be needed. In fact, among those participants who did not provide the correct
answer to this item (K4) (n = 79), 85% (CI 75-91%) claimed to be needing no further information on preventive measures (I4), and 86% (CI 77-92%) claimed to be needing no further information on transmission between people (I1). Similar results were found for other topics: Of the participants who did not answer correctly on life-threatening disease progression (K7) (n = 58), 74% (CI 62-84%) claimed to be needing no further information on the topic (I6). Of the participants who did not answer correctly on incubation time (K6) (n = 51), 45% (CI 32-59%) claimed to be needing no further information on the topic (I2). Of the participants who did not answer correctly on infectiousness (K5) (n = 45), 73% (CI 59-84%) claimed to be needing no further information on the topic (I5). This is clear evidence that, although knowledge was fairly high on some topics, many participants overestimated their knowledge (or for other reasons thought that no further information was needed).
---
Sources of Information and Means of Communication
The vast majority of the participants (81%) expected the government to be their source of necessary information on COVID-19, as shown in Table 3 (S4), while 63% (also) wished for scientists/universities (S6), and 61% (also) wished for their employer to take on that role (S1). Any other sources were named significantly less often. The most preferred means of communication by which to receive the information were public television (75%, M3), radio (66%, M6), and newspaper articles (57%, M5). Of those participants who wished to receive the information from their employer (n = 112, S1), 93% (CI 87-96%) wished to receive it in writing (M9), and only 27% (CI 19-36%) (also) orally (M8). Accordingly, television (72%, U3) and radio (72%, U4) were the most popular media for keeping informed ("several times a week" or "daily") on recent news in general, not only related to COVID-19 (see Table 4). Still, more than half of the participants read articles in daily newspapers at least "several times a week" (54% for newspapers requiring subscription, U1; 56% for free newspapers, U2). News automatically suggested by web browsers (U5) was significantly less popular than the other mentioned media.
---
Emotional Distress and Risk Perception
Merely 18% (CI 13-24%) of the participants felt at least worried (i.e., "worried" or "very worried") about getting infected with COVID-19 themselves (see graph A in Figure 2). By contrast, 52% (CI 44-58%) felt at least worried about the same possibly happening to their family/friends. 60% (CI 53-68%) felt at least worried about the possibility of numerous deaths among elderly or sick people (people not necessarily personally known to them). Hence, the participants were significantly more often at least worried (i.e., "worried" or "very worried") about other people being at risk than about themselves (p < 0.001, for both bivariate comparisons, Fisher's exact test). Participants working in long-term care were more likely to feel at least worried (i.e., "worried" or "very worried") about contracting COVID-19 themselves (AME = 0.335, p < 0.05, binary logistic regression), participants who had passed the majority of their education in Germany were more likely to feel at least worried about their family/friends contracting it (AME = 0.263, p < 0.01), and both participants working in somatic care (AME = 0.258, p < 0.001) and participants working in nursing homes (AME = 0.284, p < 0.001) were more likely to feel at least worried about deaths among elderly or sick people.
The provided answers on how severe a threat COVID-19 posed to specific groups are illustrated by graph B in Figure 2. This pertains to the hypothetical scenario without precautionary measures because of COVID-19 other than the usual ones against a common flu ("business as usual"). 90% (CI 85-93%) claimed an at least serious (i.e., "serious" or "very serious") threat for the global population, and 86% (CI 81-91%) claimed so for healthcare workers who directly attended to COVID-19 patients. 85% (CI 80-90%) claimed an at least serious threat for the Swiss population, and 76% (CI 69-81%) claimed so for the life of their family members and friends. Only 49% (CI 42-56%) claimed an at least serious threat to their own life. Again, a pattern emerged according to which the participants significantly more often saw groups other than themselves as threatened (p < 0.001, for all four bivariate comparisons, Fisher's exact test), which is analogous to the observed pattern of emotional distress. The results of the assessment on the discrete 0-10 rating scale were consistent with those on the Likert scale. The proportion of participants who estimated a strictly lower threat of COVID-19 to their own life was 65% (CI 59-73%) compared to the global population, 64% (CI 56-71%) compared to healthcare workers directly attending to COVID-19 patients, 57% (CI 50-65%) compared to the Swiss population, and 51% (CI 44-59%) compared to their own family and friends. Vice versa, the proportion of participants who estimated a higher threat to their own life than to another group was a single-digit percentage (for any of the four comparisons). Furthermore, 38% (CI 31-46%) claimed that there was a greater threat to the global population than to the Swiss population, and only 4% (CI 2-8%) claimed vice versa.
The observation that healthcare workers who directly attended to COVID-19 patients were predominantly estimated to be more threatened than one's own life calls for closer consideration. It applied even among those participants who themselves attended to COVID-19 patients (n = 40, therein: 58% with CI 41-73%; vice versa 3% with CI 0-13%). This is remarkable, as the majority therein claimed a lower threat for themselves individually than for others, even though they belonged to the very group they were comparing themselves to. While this may appear somewhat paradoxical at first glance, it is another occurrence of the above-mentioned pattern, this time within the group of their peers. Participants who themselves were part of a risk group regarding COVID-19 because of their health condition estimated the threat to their own life to be higher (AME = 2.43 points, p < 0.001, fractional logistic regression, with a mean outcome over all individuals of 5.47 points on the 0-10 scale), which is unsurprising. The same participants also estimated the threat to the life of their family members and friends to be higher (AME = 1.31 points, p < 0.001, with a mean outcome over all individuals of 6.80 on the 0-10 scale).
---
Perception of and Adherence to Preventive Guidelines
Table 5 tabulates the cumulative distribution of the perceived likelihood of a second wave of COVID-19 and of another pandemic in the future. Note that this is the cumulative distribution over the Likert scale, which is split in its middle such that the left side of the table cumulates frequencies from high to low likelihoods, starting on the left with the highest ("certainly"), and the right side of the table cumulates frequencies from low to high likelihoods, starting on the right with the lowest ("certainly not"). 78% (CI 71-83%, F1) estimated a second wave of COVID-19 to be at least rather likely (i.e., "rather likely," "very likely," or "certainly"), and 89% (CI 83-93%, F2) estimated the same for another pandemic in the future. On the discrete 0-10 rating scale, 39% (CI 32-47%) estimated the likelihood of another pandemic (with another pathogen) to be strictly higher than that of a second wave of COVID-19. Vice versa, only 23% (CI 17-30%) estimated the likelihood of a second wave of COVID-19 to be strictly higher.
Table 6 shows how strictly the participants claimed to be following certain preventive guidelines at the time of the survey (A1-A7 in Table 6). Like Table 5, the upper part of Table 6 is split in its middle, such that the left side of the table cumulates frequencies from high to low adherence, starting on the left with the highest ("always"), and the right side cumulates frequencies from low to high adherence, starting on the right with the lowest ("never"). Strict adherence (answer option "always") was most frequent regarding coughing and sneezing only into a tissue or the inside of one's own elbow (89%; 97% at least "almost always;" A6), not shaking hands (82%; 96% at least "almost always;" A5), and not leaving home in case of a cough or fever and contacting the hotline or a physician via phone (81%; 89% at least "almost always;" A7). 56% (75% at least "almost always") claimed to always refrain from public transportation during rush hour (A1), while 8% did not refrain from public transportation during rush hour at all. 36% (67% at least "almost always") disinfected or washed their hands with soap after each physical contact (except with family, A4). Only 8% managed to always (50% at least "almost always") keep a physical distance of at least two meters at all times (except from their closest family, A3), which is not surprising, given that all of the participants regularly worked with patients. For each of the five covered preventive guidelines, the proportion of participants who followed them at least "predominantly" lay above 80% (CIs above 74%).
Participants in leading positions were more likely to refrain from public transportation during rush hour (at least "almost always," AME = 18.5 percentage points, p < 0.01, binary logistic regression), participants living by themselves were less likely to keep a physical distance of two meters from people other than their closest family (at least "almost always," AME = -33.7 percentage points, p < 0.001), and participants who were part of a risk group regarding COVID-19 because of their health condition were more likely to disinfect or wash their hands with soap after each physical contact (except with their family) (at least "almost always," AME = 19.4 percentage points, p < 0.05).
The lower part of Table 6 shows for how long the participants expected to continue to follow the guidelines with the same intensity in the future, that is, following the survey. The following proportions of participants expected to continue indefinitely or until a vaccine would be available: 92% with coughing and sneezing only into a tissue or the inside of their elbow (A16), 55% with disinfecting or washing their hands with soap after each physical contact (except with family, A14), 47% with not leaving home in case of a cough or fever and contacting the hotline or a physician via phone (A17), 45% with not shaking hands (A15), 35% with not using public transportation during rush hour (A11), and 31% with keeping a physical distance of at least two meters from everyone except their closest family (A13). While not leaving home in case of a cough or fever and not shaking hands were both followed with high adherence at the time of the survey, roughly half of the participants expected to keep this up for a year or less only, and not necessarily to wait until a vaccine would be available. These two guidelines concern socially and culturally relevant behaviors. Staying at home may be perceived as an act of social isolation, depending on the situation, and shaking hands is a common gesture of greeting in Switzerland. Refusing an offered handshake without providing a reason, such as a health hazard, can be considered a sign of disrespect. The analysis of those participants who claimed to adhere to the guidelines at least "predominantly" at the time of the survey showed that participants of age 45 to 54 were more likely to continue keeping a physical distance of two meters until a vaccine would be available (AME = 24.6 percentage points, p < 0.01), and that participants of age 55 and above were even more likely to continue keeping a physical distance of two meters (AME = 42.0 percentage points, p < 0.001), with both age groups being compared to participants of age below 45.
Furthermore, participants who had passed the majority of their education outside of Switzerland were more likely to continue disinfecting or washing their hands (AME = 27.5 percentage points, p < 0.01). Finally, participants of age 55 and above were more likely to continue not shaking hands (AME = 25.6 percentage points, p < 0.01), as were participants who answered the survey on 20th June or later (see section "Data Analysis" for explanation) (AME = 27.5 percentage points, p < 0.01).
Table 8 lists the pair-wise rank correlations of the reported adherence to the guidelines. Within each cell of the table, the upper coefficient refers to adherence at the time of the survey (A1-A7), and the lower coefficient refers to continued adherence in the future following the survey (A11-A17). Correlation across the different guidelines was rather low at the time of the survey. Even though mostly significantly different from zero, the effects were of small or moderate size according to the classification by Cohen (1992), except for the two pairs A3/A4 and A3/A5. This means that an individual typically did not follow all guidelines to a uniform extent, but instead differentiated between the guidelines, following some of them more strictly and others less strictly. By contrast, correlation was high regarding continuation into the future. Here, the effects were mainly strong, with coefficients up to 0.707, and only a few of them were moderate (those involving A16, which is the dimension with the highest expected future adherence by a large margin). Hence, an individual typically differentiated her/his behavior across the guidelines initially, and then intended to continue the pattern for a certain duration, without strongly readjusting it over time by relaxing some of the guidelines earlier than others. Please note that the correlations regarding continuation in the future (A11-A17) were calculated for the subsample of the 95 participants who did not answer with "don't know." If the correlations regarding adherence at the time of the survey were computed for the same subsample (n = 95), the effects were even smaller than the ones shown in Table 8 (all but two of them).
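The rank correlations in Table 8 and their classification can be sketched as follows. The data below are toy Likert-style responses for two hypothetical guidelines, not the survey data; the 0.1/0.3/0.5 cut-offs are the conventional small/medium/large thresholds from Cohen (1992).

```python
from scipy.stats import spearmanr

def cohen_size(r):
    """Classify |r| per Cohen (1992): >= 0.5 large, >= 0.3 medium, >= 0.1 small."""
    a = abs(r)
    if a >= 0.5:
        return "large"
    if a >= 0.3:
        return "medium"
    if a >= 0.1:
        return "small"
    return "negligible"

# Toy responses for two guidelines on a six-point scale (1 = "never" ... 6 = "always")
guideline_a = [6, 5, 6, 4, 3, 6, 2, 5]
guideline_b = [5, 6, 6, 3, 4, 5, 1, 4]
rho, p_value = spearmanr(guideline_a, guideline_b)
```

Spearman's method correlates the ranks rather than the raw values, which suits ordinal Likert responses; building the full Table 8 would simply repeat this computation for every pair of guideline items.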
Of the mentioned preventive guidelines (as listed in Table 6), two participants (2%, CI 1-5%) claimed that "most of them are exaggerated for persons working with patients or elderly people," and 14% (CI 9-19%) claimed that "most of them are exaggerated for people not working with patients or elderly people."
Figure 3 depicts the participants' plans of traveling abroad before the end of the year 2020. Had the pandemic not emerged, 83% (CI 76-87%) would have traveled abroad. Given the pandemic, only 31% (CI 25-38%) still had plans of traveling abroad at the time of the survey. Unsurprisingly, participants who had passed most of their education in Germany (rather than in Switzerland) were more likely to still have plans of traveling abroad given the pandemic (AME = 44.2 percentage points, p < 0.001, binary logistic regression). One participant commented that she/he had elderly relatives abroad and therefore had to follow a "familial obligation."
---
Impact on Work Situation
Table 7 shows the participants' assessment of the initial preparation for a viral pandemic before the outbreak (items P1-P5), how COVID-19 had affected their work situation (W1-11), and which lessons should be learned from its first wave (L1-L10). The participants largely indicated that before the COVID-19 pandemic had broken out, the preparation by the government and the healthcare sector for a viral pandemic had been insufficient. 91% deemed preparation insufficient regarding the availability of disinfectant and protective masks (P1), 86% regarding personnel (P2), 77% regarding structures (P3), and 70% regarding processes and contingency plans (P4). More than half of the participants (58%, CI 51-65%) claimed that in none of these four areas preparation had been sufficient (P5).
Following the outbreak, 44% of the participants felt more stressed than usual because of the pandemic (W1 in Table 7). 38% worked unusual tasks as a result of the COVID-19 pandemic (W4), and 32% worked more hours than usual (W2). 28% indicated that not all materials and structures necessary to effectively protect the healthcare staff from an infection with COVID-19 were available (W7), and 19% thought that not all the decisions necessary to do so were being taken (W8), respectively. 92% (CI 88-95%) of the participants reported multiple effects of the pandemic on their work situation (W1-W10). Only one participant concluded that the first wave of the pandemic had no effect on her/his work situation at all (W11). If a participant selected the item labeled "other" (W10), they were asked to specify these other effects. Among these text answers (n = 18), the most frequently mentioned issue was the handling of visitors of patients (four mentions), which grew more challenging due to more restrictive preventive measures and visiting hours, as well as due to visitors not abiding by them and even verbally abusing the staff. Three participants again emphasized a severe lack of protective equipment; one of them described "chaotic" circumstances, in which nurses had been forbidden to use masks until the first confirmed case had occurred within the institution, with no measures of isolation afterwards. Three times it was claimed that wearing the protective material, particularly masks, made work more difficult or more exhausting. Three reports were given of increased psychological strain among the staff and the patients. Another three statements were made that organizational challenges were high, because changes needed to be implemented within a very short time and without a test run.
Single mentions were the introduction of tracking, a lack of personnel, economic aspects dominating the healthcare system, and employers threatening employees with consequences should they introduce COVID-19 into the institution. One participant reported actually having less work because fewer patients were present in her/his institution due to the pandemic.
---
Reaction by the Government
A majority of 72% (CI 65-78%) found the preventive measures implemented by the federal government between 17th March and 26th April 2020 (i.e., the "lockdown" during the first wave) to be "adequate." Another 17% (CI 13-23%) found them to be "not strict enough / too late / too short in duration," and 10% (CI 7-15%) found them to be "exaggerated." 56% (CI 48-63%) concluded that the relaxation schedule from 27th April onward was "adequate," while 32% (CI 26-39%) would have preferred the preventive measures to be relaxed "later / less strongly," and 11% (CI 8-17%) claimed that the measures should have been relaxed "earlier / more strongly." The above-mentioned date of 19th June (see section "Data Analysis") was predictive of the evaluation the participants made. Participants who completed the survey after that date were significantly more likely to deem the relaxation plan too liberal (i.e., relaxation should have been done "later / less strongly"), compared to participants who completed the survey up to 19th June (AME = 0.281, p < 0.001, binary logistic regression). In addition, participants who had children were less likely to evaluate the relaxation plans as too liberal (AME = -0.185, p < 0.01), and participants who had passed the majority of their education in Germany were more likely to evaluate them as too liberal (AME = 0.285, p < 0.01).
---
Key Lessons
More than half of the surveyed healthcare workers (58%, CI 51-65%) claimed the need for more/better medical equipment (including drugs) than was available during the first wave of the COVID-19 pandemic (L4 in Table 7). 40% required better protection of their own physical health (L7), and even 44% called for better protection of their mental health (L8). 37% asked for more (assigned) personnel (L2). 37% thought that hourly wages should be higher due to the exceptional circumstances (L6). 36% required more detailed/accurate information about the COVID-19 symptoms (L3), and 32% called for an earlier warning next time (L1). Only 14% indicated that the work schedule should be left unchanged despite the pandemic ("business as usual," L5). 7% claimed that no lessons needed to be learned, as preparation for and handling of the pandemic had been appropriate in their view (L10).
---
Presumed Cause of the Pandemic
About half of the participants (54%, CI 46-61%) identified negligent behavior of humans towards animals/nature as the cause of the COVID-19 pandemic, as depicted in Figure 4. Six participants (3%, CI 1-7%) concluded that it was instead a willful transfer to humans as a biological attack. Among "other causes" (4%, CI 2-8%), mutation of SARS, improper hygiene in the food sector, politics, economics, overpopulation of the planet and overconsumption of natural resources, ignorance, and denial were specified.
---
DISCUSSION
---
Key Findings
This survey explored the knowledge of Swiss healthcare workers on COVID-19, how the first pandemic wave impacted their work situation, and how they reacted both emotionally and regarding their adherence to preventive guidelines.
Assessed after the first wave of COVID-19 had been overcome, clinical knowledge of COVID-19 was high among healthcare workers on several main topics, but not on all of them. In particular, a large proportion (more than a third) overestimated the effectiveness of standard hygiene (namely frequent washing of hands and sneezing into tissues) as a regime that would virtually exclude any transmission of COVID-19. This proportion was even higher among those who had worked more hours than usual during the pandemic. This misjudgment was prevalent, despite most of the respective healthcare workers knowing that COVID-19 was not only transmitted via physical contact. Also, and this may be critical, the vast majority of them nevertheless believed that they needed no further information on the topics of preventive measures and transmission. Another topic where knowledge was limited, however to a lesser degree, was the comparison of COVID-19 with the common flu regarding infectiousness, incubation time, and life-threatening disease progression. Again, a pattern emerged whereby the majority of those participants who did not provide the correct answer believed that they needed no further information (except for incubation time, where the proportion was slightly smaller than half). This clearly shows that even after the first wave of the pandemic, healthcare workers had still not received comprehensive or uniform education on certain essential topics. It also reflects the circumstance that COVID-19 had not only been present in media of specific focus and readership, such as scientific media from which to be absorbed by the healthcare institutions, but that it had also been dominating the popular media since shortly after the outbreak.
In this ever-present flow of information from most heterogeneous outlets, distinguishing scientific facts (or the lack of scientific facts, when that was the case) from speculation and opinion became significantly more challenging (see e.g., the notion of infodemics, Lexico dictionary, 2020). This raises the question of by whom, and through which processes, the provision of comprehensive and uniform clinical information to healthcare workers can and should be ensured when managing a pandemic of global relevance. The healthcare workers most often expected the government to provide them with the necessary information, followed by scientists/universities, and their employer. Any other possible sources (e.g., journalists) should play a smaller role according to them. They preferred to receive the information via public television (and to a slightly lesser extent via radio and newspaper articles). Should the employer provide such information, they had a clear preference for it to be in writing rather than oral.
The healthcare workers reported considerable emotional distress caused by the pandemic, with more than half of them feeling worried about their family or friends possibly getting infected, and about numerous deaths among elderly and sick people, respectively. About one in five reported to be feeling very worried because of these possibilities, while less than ten percent were not worried at all. By contrast, they were significantly less worried about themselves possibly contracting the disease. They were also asked to estimate the threat COVID-19 posed to different groups, irrespective of preventive measures, meaning for the hypothetical case in which no other precautionary measures would have been taken than the usual ones against the common flu. Again, they were significantly more concerned about the global and Swiss population than about themselves. Interestingly, they were also significantly more concerned about healthcare workers working with COVID-19 patients than about themselves. The latter was true even among healthcare workers who themselves attended to COVID-19 patients. While this finding may appear as a paradox, it is in line with the repeating pattern of them being more worried/concerned about others than about themselves, even if they are in the same situation. Even though this manifests as an altruistic trait, which may be lauded as "heroic" by society or patients (Cox, 2020), it ought not to be forgotten that this attitude serves the short-term interest of the patients, but could be detrimental to the physical and mental health of the healthcare worker.
The vast majority of the healthcare workers (three in four) estimated another wave of COVID-19 in Switzerland, after the first one that took place in March/April 2020, to be "rather likely." A different pathogen causing another pandemic of equivalent or greater magnitude than COVID-19 within the next 20 years was considered to be even more likely. This provides the relatively clear picture that healthcare workers expected global pandemics to repeatedly be a part of human society in the future, and not a once-in-a-lifetime event.
The self-reported adherence to preventive guidelines was such that at least four in five healthcare workers followed them at least "predominantly." The guidelines of refraining from shaking hands, no uncovered coughing or sneezing, and staying at home in case of a cough or fever, were followed strictly (meaning "always") by at least four in five healthcare workers. All of the tested guidelines were official recommendations by the Swiss government during the "lockdown" phase of the first wave (however not legally binding, and relaxed after the "lockdown"). Interestingly, the pair-wise correlation across these guidelines was insignificant to moderate (with two exceptions), meaning that most healthcare workers displayed a pattern in which they did not follow all guidelines with the same commitment. Only between roughly a third and half of the healthcare workers expected to continue their pattern of adherence until a vaccine would be available in case that this would take longer than a year. This excluded the guideline of only covered coughing and sneezing, where the overwhelming majority expected to keep their adherence until a vaccine would be available (without a time limit). With increasing age, healthcare workers were more likely to expect to keep their adherence to both social distancing (two meters) and hand hygiene for a longer period of time. After eight in ten healthcare workers had plans of traveling abroad before the pandemic emerged, three in ten still kept such plans after the first wave.
The overwhelming majority of the healthcare workers stated that the preparation by the government and the healthcare sector for a viral pandemic had been insufficient at the time COVID-19 emerged, especially regarding the availability of disinfectant and protective masks (nine in ten), but also clearly so regarding personnel (six in seven), structures (four in five), processes, and contingency plans (seven in ten). The majority even claimed that preparation had been insufficient in all of these areas. It is therefore not surprising that the reported effects of the pandemic on the work situation of the healthcare workers were rather diverse. Roughly one in three had worked more hours than usual. This finding confirmed Spiller et al. (2020), who further found that hours worked were sluggish in converging back to previous levels. Even before the pandemic, excessive labor of healthcare workers had been an often-discussed topic in the literature, particularly regarding its effect on psychosocial function, productivity, and working errors in an industry where the margin for error often is small (see e.g., Caruso, 2006; Griffiths et al., 2014). Another one in three healthcare workers had worked unusual tasks. One in four reported that not all materials and structures necessary to effectively protect the healthcare staff from an infection with COVID-19 were available during the first wave. One in six (each) were more pressed for time, had an employer showing less consideration for their needs than usual, or observed a relevant share of nurses not strictly abiding by the hospital-/institution-specific regulations regarding protective masks, washing of hands, and physical distancing, respectively.
Further, less frequently named effects were working for another department/division, challenging situations with visitors of patients due to increased precautionary measures (and some visitors not abiding and even being verbally abusive), physical exhaustion due to wearing a mask while working, increased pressure by the employer, increased psychological strain, and implementing new processes within short time and without testing. The most frequently reported effect, however, was an increase in emotional stress level as a result of the COVID-19 pandemic (almost half of the healthcare workers).
The vast majority of the healthcare workers found the reaction by the Swiss government, specifically the "lockdown" during the first wave, to be adequate, while one in six found it to be not restrictive enough (or too late/short), and one in ten found it to be exaggerated. The relaxation plan following the "lockdown" received significantly less approval, with one in three healthcare workers claiming that the preventive measures should have been relaxed later (or less strongly), and one in ten claiming the opposite. The policy change announced by the national government on 19th June, according to which many restrictive measures would be relaxed or abolished, the national coronavirus taskforce (KSBC) would be suspended, and the management of further pandemic waves in the future would be mainly the duty of the cantons, was deemed as too liberal by a significant proportion of healthcare workers. A similar result showed in the analysis of their adherence to preventive guidelines, in which the healthcare workers who participated in the survey after this change of policy were significantly more likely to expect to continue not shaking hands at least until a vaccine was available, compared to healthcare workers who had participated before this change of policy.
---
Lessons to Be Learned
Key lessons that should be learned, according to healthcare workers themselves, were identified. They should be seen as recommendations for the management of further pandemic waves, which have recently developed in Switzerland and many other countries.
According to the surveyed healthcare workers, the lesson most often claimed as needed to be learned was the requirement of more/better medical equipment (including drugs) than during the first wave. This again reflects the lack of protective materials at the beginning of (and also during) the first wave in Switzerland, as well as the globally ongoing efforts in research for vaccination and therapeutics. This can be seen as the first aim of improvement according to healthcare workers. While their personal physical and mental wellbeing, as well as their ability to fulfill their tasks effectively and efficiently, are affected by other factors as well, progress towards this first aim can be expected to yield most significant improvement. The healthcare workers' second priority was better protection for their own mental and physical health (with mental health being named more frequently, however with a statistically insignificant difference compared to physical health). A proportion of more than four in ten stated this need. This is in accordance with the above-mentioned group of medical organizations, which together recently issued an open call to the Swiss government for support in order to prevent further deterioration of the state of Swiss healthcare workers (see section "Introduction"). In addition to practical challenges, a viral pandemic can cause a moral dilemma of being responsible for patients, but thereby also risking getting infected and infecting others, which may impose additional mental and emotional strain and even affect decision-making. Irrespective of the COVID-19 pandemic however, the literature has suggested that healthcare workers find themselves in a difficult industry, as far as emotional, communicational, and decision-making challenges are concerned (see e.g., Wulf, 2012;Joseph and Joseph, 2016), which can be psychologically depleting. 
In this sense, the COVID-19 pandemic can be seen as an event which has not only caused new challenges for healthcare workers, but which has also emphasized shortcomings that were prevalent beforehand. Solutions therefore should address both the pandemic-specific as well as the underlying long-term challenges of the industry. The third lesson was the need for more personnel to be available (and assigned) to handling the pandemic, as well as increased hourly wages during the exceptional circumstances. It needs to be kept in mind that during a pandemic, healthcare workers getting infected themselves is a twofold risk, as it not only threatens the health of the individual, but also isolates her/him from the workforce at least for a period of quarantine. Fourthly, more detailed information about the symptoms of the disease was required, as well as a system of earlier warning in order to provide room for preparation. Each of these lessons were named by more than three in ten healthcare workers (some significantly more). Nevertheless, there was a small minority of healthcare workers (one in fifteen), who claimed that no lessons needed to be learned from the first wave of the pandemic, as preparation for and handling of it had been appropriate in their view. Given all of these results, the fifth lesson to be learned is that healthcare workers and their individual situations are considerably heterogeneous. They have faced a variety of different consequences and challenges during the pandemic, and some have been affected more strongly than others. Therefore, solutions must be specific to varying circumstances and remain adjustable over time.
---
Limitations
The population of healthcare workers who directly attend to patients during the present COVID-19 pandemic is at the center of the topic. To date, no randomized sample with mandatory participation (or complete survey) has been drawn from this population in Switzerland. Therefore, clustered sampling was conducted for this survey, contacting the attendees of extra-occupational professional development courses at Careum Weiterbildung in Aarau. The vast majority of healthcare workers in Switzerland repeatedly attend such courses, and most of the institutions offering these courses follow a similar scheme. Careum Weiterbildung encompasses a wide range of attendees from different institutions, areas of healthcare, and geographical regions across Switzerland. The sample of this survey therefore was drawn from a very broad population of Swiss healthcare workers. It needs to be noted however, that participation was not mandatory within the cluster of Careum Weiterbildung. Therefore, randomness cannot be ascertained, nor excluded. Also, despite the teaching institutions being of a similar scheme, and despite the regions from which they attract students overlapping, homogeneity of the clusters is unproven. The sample size is limited. A larger sample, although not necessarily related to unbiasedness, could decrease the error probabilities on inferential statistical tests. Causal effects of the pandemic were assessed by directly asking the participants to do so themselves, whenever considered to be expedient, e.g., by asking "how has the COVID-19 pandemic affected your work situation?" Within the cross-sectional design of the study, concepts such as emotional distress and risk perception could not be tracked over time before/during the pandemic, as a panel or follow-up study could have. Moreover, all data was self-reported by the participants. Emotional distress was measured by four items. 
These were derived from three questions on how worried they were, as shown in Figure 2, referring to three different groups (or oneself) that may be threatened by the pandemic, with answer options on a four-point Likert scale. Also, the participants indicated whether they felt more stressed during work because of the COVID-19 pandemic, by answering a yes/no question (item W1 in Table 7). A seven-item validated scale of the fear of COVID-19 has been published by Ahorsu et al. (2020), which aims at differentiating emotions more strongly (feeling "afraid," "uncomfortable," "nervous," having clammy hands, a racing heart, losing sleep) and could yield more detailed insight. Since this study was conducted for Swiss healthcare workers, understanding their specific situation at the time was crucial. Consequently, the findings may only be applicable to nations/healthcare systems in which the first wave of the pandemic followed a comparable pattern.
---
DATA AVAILABILITY STATEMENT
The datasets presented in this article are not readily available because of requirements of anonymity. However, the raw data supporting the conclusions of this article will be made available by the authors to any qualified researcher, excluding the demographic data and the free text answers, such that any inference that would breach the anonymity of an individual remains ruled out. Requests to access the datasets should be directed to MR, [email protected].
---
ETHICS STATEMENT
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
---
AUTHOR CONTRIBUTIONS
MR contributed to the quantitative methodology, data curation, and formal analysis. SG performed the literature research. Both authors contributed to the conceptualization, composition and online implementation of the questionnaire, writing and editing of the article, and project administration. Both authors approved the submitted version.
---
Conflict of Interest:
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. |
At the beginning of the 2020 coronavirus pandemic, the World Health Organization (WHO) stated that the use of masks was recommended only for sick people, not for healthy people. However, the development of the virus finally prompted WHO to appeal to everyone (healthy and sick) to always use masks outside the home. This study used a qualitative descriptive research design with a sample of 50 respondents. The instrument was an interview guide used with the research subjects, namely families of patients who were hospitalized or treated as outpatients at RSUD Bangil. Results: Based on the interview results, the majority of respondents had sufficient knowledge about masks. Most respondents did not wear masks, or removed their masks according to their wants and needs, mostly due to tightness or difficulty breathing.
Coronavirus disease 2019 has attracted global attention since December 2019 (Saadata et al., 2020) and was declared a pandemic by the World Health Organization (WHO) on March 11, 2020 (WHO, 2020; Maulydia, 2021). According to Scheid et al. (2020), after being declared a pandemic, Covid-19, believed to have originated in Wuhan, China, has now spread to more than 200 countries (Forouzandeh et al., 2021), and the number of daily Covid-19 cases worldwide continues to increase (Maulydia, 2021). This virus has become a pandemic because it has spread throughout the world, including Indonesia. COVID-19 can cause mild to severe symptoms and even death. Mild symptoms commonly seen in COVID-19 patients are fever, mild cough, headache, anosmia, runny nose and sneezing, with a respiratory rate of 12-20 times per minute.
According to WHO data (2021), as of January 27, 2021, the number of Covid-19 cases worldwide was 99,864,391 positive cases, of which 2,149,700 people died. Indonesia had the most Covid-19 cases in Southeast Asia: positive cases reached 1,024,298 people and 28,855 people died (Marchel et al., 2022). In Indonesia, as of Monday, August 22, 2022, the number of patients infected with Covid-19 increased by 3,300 new corona cases, bringing the total number of people infected with Covid-19 to 6,112,658; active cases decreased by 1,697 from the previous day to 48,803, while the death toll was 157,396. Based on data from the Central Bureau of Statistics of Pasuruan Regency, the number of patients infected with Covid-19 in 2020 was 68.36 cases; in 2021 the number of patients infected with Covid-19 increased to 1,034 cases. Research (Pratiwi, 2020) shows that although wearing a mask can protect oneself and others from Covid-19 infection, 35.5% of people rarely use masks and 6.7% do not use masks (Longrich & Sheppard, 2020); infection is a risk not only for everyone in general, but also for relatives of patients who are in the hospital area. According to the results of a preliminary survey conducted around Bangil Hospital, Pasuruan Regency, of the 50 families who visited and attended to patients at the hospital, 60% (30 people) used masks incorrectly, 26% (13 people) used masks correctly, and the remaining 14% (7 people) did not wear masks.
The increasing number of positive Covid-19 cases may be due to the easy spread of this disease (Setyawati et al., 2020). According to the Director General of P2P (2020b) of the Indonesian Ministry of Health, Covid-19 is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which is transmitted between humans through droplets and contact. The spread of Covid-19, which affects the economy, politics, social life, culture, defense and security, as well as public welfare, requires efforts to overcome Covid-19 (RI, 2020). Global research on mask use continues to provide limited evidence for influenza control and highlights potential problems such as poor mask adherence and improper use (Longrich & Sheppard, 2020). A respirator is a device designed to protect people from inhaling pollutants or air contaminants; a respirator is not intended to replace other methods of eliminating disease, but rather serves to adequately protect the wearer. Masks are often used to protect against particles and aerosols that can injure the airways of people who do not wear PPE; particles and aerosols of varying sizes and chemical properties can harm humans, and NIOSH advises masks that use filters. However, the lack of public knowledge and understanding means that the use of masks by the public is often neglected, even though the correct and appropriate use of masks is one way to prevent the spread and transmission of the Covid-19 virus. According to PMK No. 27 of 2017, it is expected that the public can follow the correct rules regarding the use of masks according to existing SOP instructions, because the correct or incorrect use of masks can affect whether people become infected with the disease. If a mask is not used properly, it cannot work as intended and provide optimal protection.
Covid-19 mitigation in the Behavior Change Division of the Covid-19 Handling Task Force is focused on improving 3M compliance, i.e., wearing masks, maintaining distance, and washing hands (Fadel Muhammad, 2021). Other efforts that health workers can make to support the government's appeal on the use of masks include providing clear and appropriate education and information to the public so that the public understands the importance and benefits of using masks correctly. Governments around the world are also drafting different guidelines (Setiadi, 2021). Mask use by the whole community is a globally agreed effort recommended to limit spread by asymptomatic carriers in the community, which can be a major cause of the rapid spread of Covid-19 infection (Atmojo et al., 2020). Based on PMK Regulation Number 27 of 2017 concerning Infection Prevention in Health Services, regarding the correct use of masks according to existing instructions and SOP rules, the researchers were interested in studying mask-use behavior among patients' families at Bangil Hospital, Pasuruan Regency.
---
METHOD
This study uses a qualitative descriptive research design in which the researchers tried to explore as much information as possible about the problem that is the topic of the research, prioritizing verbal data. The type of research used in this study is a case study. The population in this study was all families of patients at Bangil Hospital, Pasuruan Regency; the families of patients who became key informants, located in the inpatient and outpatient rooms within Bangil Hospital, Pasuruan Regency, totaled 50 informants. The sampling technique used was snowball sampling. The variable in this study was the behavior of the patient's family in wearing masks.
---
FINDING AND DISCUSSION
Table 1 displays the informant characteristic data.

A. Respondents' Knowledge of Masks

From the results of the interviews, out of fifty respondents, twenty-nine had sufficient knowledge; in this case, respondents were able to understand and know about masks, namely the meaning of masks, the benefits of masks, the types of masks, and how to use masks correctly. This sufficient knowledge was obtained through various channels, such as books, mass media, online media, counseling from the Puskesmas, and from close relatives who provided information about the COVID-19 disease.
From the results of the study, it was found that most respondents were over thirty-five years old. The older the respondent, the better their level of knowledge. According to Galve et al. (2015), several studies explain that a person of productive age has the best level of knowledge or cognition. In addition, at that age a person also has extensive experience and the ability to carry out activities that will certainly support their knowledge in everything. On the other hand, as a person gets older, their commitment to something in decision-making will also be higher (Abadi et al., 2019). In addition to age, judging from the level of education, a person's education level can affect knowledge because the acceptance and understanding of someone who has higher education is better than of those who have low education. Knowledge is the result of knowing, and this happens after someone senses a particular object. Most human knowledge is acquired through the eyes and ears. Knowledge is needed as support in generating self-confidence as well as attitudes and behaviors every day, so it can be said that knowledge is a very important domain for the formation of one's actions (Notoatmodjo, 2007).
According to the researcher's assumption, sufficient knowledge about COVID-19 prevention efforts, especially in the use of masks, will greatly affect public behavior in carrying out COVID-19 prevention efforts. People with sufficient knowledge are expected to make appropriate COVID-19 prevention efforts. Awareness will grow in the community to make efforts to prevent COVID-19 disease if residents have good knowledge.
---
B. Behavior of Mask Use in the Patient's Family in the Hospital Environment
The results showed that most of the patients' families or respondents did not use masks, or removed their masks according to their wants and needs, namely 29 respondents (58%). According to Bloom (Notoatmodjo, 2007), behavior is one of the determinants that affect the degree of health. Behavior is any intentional action of a person for a specific purpose. Behavior can arise as a result of being influenced by various factors. Lawrence Green (Notoatmodjo, 2007) revealed that behavior is influenced by predisposing factors such as knowledge, values, beliefs, and attitudes; enabling factors such as the availability of funds, facilities, and time; and also reinforcing factors such as support from family and health workers. Using masks appropriately basically ensures maximum mask effectiveness and avoids an increased risk of transmission; WHO has also recommended the appropriate use of masks during this pandemic to effectively prevent transmission of the coronavirus (Ratriani, 2021).
Based on the description above, the researchers concluded that health-improving behavior is influenced by individual perceptions of behavior to prevent transmission of COVID-19: the better individuals understand how to maintain health by using masks, the better their behavior will be in preventing transmission of Covid-19 to others.
---
C. How to Use Respondents' Masks During the Hospital Environment
The results showed that most respondents used masks in an incorrect or inappropriate way, namely 33 respondents (66%). The mask used needs to cover the mouth, nose, and chin perfectly, with no gaps between the face and the mask. Respondents in this study did not meet these recommendations, and there were common mistakes such as wearing a mask without perfectly covering the mouth and nose, or wearing a loose mask; this can provide an opening for viruses, bacteria, and germs to enter, contamination with the Covid-19 virus can occur, and the effectiveness of the mask is reduced (Dwirusman, 2021; Guidelines on the Importance of Mask Use, 2020; Theopilus et al., 2020).
Another inappropriate practice in using masks found in this study was the habit of lowering the mask to the chin. This can be caused by respondents feeling uncomfortable breathing when using a mask, so that the mask is often opened and lowered to the chin. This result is in line with research by Tan et al. (2021), which showed that more than a third of respondents stated that they often or always lowered the mask under the chin (often 7.0%, always 9.4%), and 41.2% sometimes hung their masks under their chins. In addition, based on research by Hou et al. (2020), the virus that causes Covid-19 is more likely to first infect cells in the nasal cavity, because the nasal cavity contains more of the ACE2 protein, the entry point of the coronavirus, than the cells of the lower airway. This shows that the cells of the upper airway are more susceptible to infection, so masks should cover the nose and mouth perfectly. According to WHO, everyone is required to always use a mask, both when sick and when healthy (Syam, 2021). Based on the guidebook on the importance of using masks (2020), the purpose of using masks is to avoid the spread of droplets, so the mask used must cover the mouth, nose, and chin. In addition, the mask should not be loose, because a loose mask lets air enter without being filtered, and viruses and bacteria can eventually enter the respiratory tract. Thus, preventing Covid-19 is not just about using masks: masks need to be worn properly and correctly so that their effectiveness is maximized.
According to the researchers' assumption, improper use of masks or lowering the mask to the chin is prone to making viruses attached to the outside of the mask move to the face and putting individuals at risk of inhaling harmful particles that stick to the surface of the mask.
---
D. Reasons Respondents Do Not Use Masks in the Hospital Environment
The results showed that the reason respondents did not use masks while in the hospital environment was mostly tightness or difficulty breathing, namely 23 respondents (46%). The data from the study show that there is still a lack of public awareness of the importance of wearing masks, and the research reflects people's unfavorable behavior.
Research by Siahaineinia and Bakara (2020) shows that there are several other reasons respondents do not use masks, such as feeling tight or uncomfortable, feeling healthy, not feeling worried about the presence of Covid-19, and not knowing the dangers of Covid-19. Apart from the discomfort caused by using masks, it is necessary to see the far more important benefit of using masks in reducing the spread of Covid-19 during this pandemic. In healthy individuals, wearing a mask even for a long time does not produce clinically relevant changes in circulating oxygen or carbon dioxide and does not affect tidal volume or respiratory rate. However, wearing a mask does produce a slight increase in respiratory resistance, caused by the mask material filtering airborne particles and aerosols, which contributes to its discomfort (Scheid et al., 2020). In addition, according to MacIntyre (2015), discomfort can also affect an individual's decision to use a mask. A review of respirator performance and standards found that all types of respirators impose a burden of discomfort (Burton et al., 2021).
Based on the research above, according to researchers, in conditions like this, the government should continue to straighten the public's perspective that our country has not fully recovered from the threat of the virus and does not mean that it can relax by ignoring health protocols.
---
CONCLUSION
Respondents had sufficient knowledge, namely 29 respondents (58%); in this case respondents were able to understand and know about masks, namely the meaning of masks, the benefits of masks, the types of masks, and how to use masks correctly. Regarding mask-use behavior, most families of patients or respondents did not use masks, or removed their masks according to their wants and needs, namely 29 respondents (58%). Most respondents used masks in an incorrect or inappropriate way, namely 33 respondents (66%); respondents did not fulfill the recommendations and made common mistakes such as wearing a mask without properly covering the mouth and nose. The reason respondents did not use masks while in the hospital environment was mostly tightness or difficulty breathing, namely 23 respondents (46%); this shows that there is still a lack of public awareness of the importance of wearing masks.
Rural populations in the United States have lower physical activity levels and are at a higher risk of being overweight and suffering from obesity than their urban counterparts. This paper aimed to understand the environmental factors that influence physical activity among rural adults in Montana. Eight built environment audits, 15 resident focus groups, and 24 key informant interviews were conducted between August and December 2014. Themes were triangulated and summarized into five categories of environmental factors: built, social, organizational, policy, and natural environments. Although the existence of active living features was documented by environmental audits, residents and key informants agreed that additional indoor recreation facilities and more well-maintained and conveniently located options were needed. Residents and key informants also agreed on the importance of age-specific, well-promoted, and structured physical activity programs, offered in socially supportive environments, as facilitators to physical activity. Key informants, however, noted that funding constraints and limited political will were barriers to developing these opportunities. Since building new recreational facilities and structures to support active transportation pose resource challenges, especially for rural communities, our results suggest that enhancing existing features, making small improvements, and involving stakeholders in the city planning process would be more fruitful to build momentum towards larger changes. | Introduction
More than half of Americans do not achieve the current recommended levels of physical activity [1], and rural populations are even less likely to meet the guidelines compared to their urban and peri-urban counterparts [2,3]. Geographic disparities in physical activity may be driven in part by environmental factors in rural communities that limit opportunities to be active, including poor quality, limited availability, and inadequate access to recreational facilities, as well as geographic and topographic features that inhibit active living and transportation [4,5].
In the past decade, there has been a proliferation of interest in understanding the relationship between built environments and physical activity [6][7][8], and identifying effective strategies to promote active living [9][10][11][12]. However, much of the evidence supporting policy and environmental strategies to encourage active living comes from research in non-rural settings [6,[9][10][11]. Rural communities have several features relevant to the built environment that set them apart from more densely populated areas [13,14], including more dispersed populations, longer distances between destinations, lack of public transportation, distinct social norms and cultural practices, and different recreational environments. Further, rural communities have higher poverty rates and lower income levels than urban areas [15], which impacts individual opportunities, as well as tax revenues and funds available for improving active living structures. Given the unique physical and contextual challenges faced by rural communities, special considerations should be given to them when developing strategies to enhance physical activity participation in rural settings [14]. Existing policies and strategies to support active living through environmental changes, however, are mostly urban-oriented. For example, although national recommendations, such as the Common Community Measures for Obesity Prevention (COCOMO) published by the Centers for Disease Control and Prevention (CDC), are in place to guide the improvement of physical activity engagement, many of them are not applicable to rural communities [14]. These published strategies largely relate to proximity to schools, enhanced walking and biking infrastructure, improvement of public transportation, mixed-land use, and improved personal and traffic safety where people are usually physically active [16].
Physical activity is a multifactorial behavior and is influenced by a wide range of factors related to a person's surroundings [17]. Objective aspects of the environment as well as individuals' perceptions of their environment are likely to be important influences [17]. As rural communities are highly heterogeneous, factors influencing rural physical activity vary substantially across rural built environment studies, depending on the geographic and social contexts of the population being studied [8,[18][19][20][21]. Given this, it is advantageous to use a variety of data sources to understand the various dimensions and support a more nuanced interpretation of barriers and facilitators. Focus groups with residents are an effective way to explore a variety of issues in a community that influence behavior [22], while interviews with key informants can be used to gather additional information and provide further insights [23]. Data from objective environmental audits can elucidate complementary or contradictory aspects of the issues identified from focus groups and key informants' interviews [24,25].
Therefore, the purpose of the present study was to gather information from built environment audits, resident focus groups, and key informant interviews to capture the different factors that influence rural adults' physical activity engagement. As building new active living infrastructure often poses resource challenges to rural communities, our hope is to identify strategies that could leverage rural communities' existing assets and resources to improve rural health.
---
Materials and Methods
Figure 1 shows the overall data collection and analysis process. Data were collected and analyzed between 2014 and 2017 as part of formative research for the Strong Hearts, Healthy Communities trial, a rural community-based cardiovascular disease prevention program [26] in eight government-designated medically underserved rural Montana towns [16]. The study was approved by the Cornell University Institutional Review Board (Protocol #1402004505).
---
Setting
Montana is one of the most rural states in the United States, with close to two thirds of the population living in rural areas (64.7%) [27]. As of 2013, only 23.3% of the state's adult population met the national physical activity guidelines [28]. Study towns were selected to represent different geographic regions.
Table 1 shows the demographics of the study towns. In all towns, most people were non-Hispanic white and the median household income was below $50,000. Population density ranged from 213 to 1029 people per square kilometer, with a median age between 37.9 and 55.4 years [29]. To illustrate the rurality of our study towns, demographics of New York City are included in Table 1 for comparison.
---
Built Environment Audits
In each town, we used a reliable community asset inventory tool [31], Inventories for Community Health Assessment in Rural Towns (iCHART), to assess active living characteristics related to physical activity opportunities. The tool organizes built environment characteristics into 11 categories: the presence of retail business, professional services, community services, town amenities, physical activity facilities, town aesthetics, condition of sidewalks, condition of the town center, condition of street and intersections, street and intersection safety features, and biking facilities. As mixed land use and active living characteristics are associated with physical activity [8,11], the assessment of these characteristics allows the understanding of the strengths and weaknesses of the built environment of the study towns. We also assessed the presence of stray animals given previous research indicating their relevance to outdoor activity safety in the midwest United States [18]. The iCHART tool contains a checklist of items that the researcher looks for and documents during the audit. Table S1 shows the individual items assessed within each built environment category.
In all towns, Meredith L. Graham (MLG) and either a local National Institute of Food and Agriculture (NIFA) extension agent or another research team member conducted the audits on different days. The audits were completed in two steps: (1) a one-mile walking tour from town center and (2) a four-mile "windshield" tour. The windshield tour allowed for identification of built environment features that were difficult to observe on foot or may not be within walking distance. Discrepancies between audits were discussed for consensus.
---
Resident Focus Groups
NIFA extension agents and their local partners recruited overweight (body mass index (BMI) ≥ 25.0) and sedentary (<30 min of physical activity per week) adults aged ≥40 years to take part in focus group discussions. Recruitment strategies included press releases, flyers, website posts, word-of-mouth referrals, and direct contact with community residents. To confirm eligibility, we asked potential participants to complete a screening survey with questions about age, height, weight, and physical activity level.
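The eligibility rule described above (adults aged ≥40 years, BMI ≥ 25.0, and <30 min of physical activity per week) can be expressed as a simple screening check. The sketch below is purely illustrative; the function names and example values are ours, not part of the study protocol, which screened participants via a survey.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def eligible(age: int, weight_kg: float, height_m: float,
             weekly_activity_min: float) -> bool:
    """Screening rule from the recruitment description above: adults aged
    40+ who are overweight (BMI >= 25.0) and sedentary (< 30 min of
    physical activity per week)."""
    return (age >= 40
            and bmi(weight_kg, height_m) >= 25.0
            and weekly_activity_min < 30)

# Example: 52 years old, 85 kg, 1.70 m, 20 min/week -> BMI ~29.4, eligible
print(eligible(52, 85, 1.70, 20))
```

A screening survey collecting age, height, weight, and weekly activity, as described above, provides exactly the inputs this check needs.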
Focus groups were stratified by age (40-64, 65+) and gender, as different responses were expected across age and gender groups. Discussions lasted between 60 and 90 min and were facilitated by MLG, an experienced qualitative researcher. The discussion guide was based on an ecological framework [32] (pp. 465-486) and developed by MLG, Sara C. Folta (SCF), and Rebecca A. Seguin (RAS) to explore participants' attitudes, perceptions, barriers, and facilitators to physical activity. Prior to use, the discussion guide was pilot-tested and refined. Participants provided written informed consent and completed a brief demographic and health behavior questionnaire. Sessions were digitally recorded for transcription. Participants were compensated $50. Table 2 shows a sub-set of the focus group guide questions that were relevant to the present study.
---
Key Informant Interviews
In each town, we conducted phone interviews with three key informants (n = 24) identified by NIFA extension agents. Key informants represented diverse areas of community leadership, including recreation, local government, public health and healthcare, social services, community programming, and business. Because of the inherent confidentiality concerns in conducting research in small rural communities, key informant characteristics will not be reported in detail. The interview guide focused on locally relevant environmental influences on physical activity. As with the focus groups, we piloted and revised the interview guide prior to use. Most interviews lasted between 45 and 60 min and all were digitally recorded and transcribed verbatim. Key informants provided verbal consent and were compensated $25. Table 3 shows a sub-set of the interview guide questions that were relevant to the present study:

Barriers: Tell me about policies, physical or social aspects in this community that make physical activity more difficult.
Facilitators: Tell me about programs, policies, physical and social aspects in this community that promote physical activity.
Programming: In your opinion, what could be done to improve the environment that would make it easier for people to be active? What types of opportunities or programs to improve their health might people in this community be interested in?
---
Analysis
We enumerated items identified through the built environment audits by category (Table S1). Brian K. Lo (BKL) and Emily H. Morgan (EHM) performed thematic analysis of the qualitative data [33]. Guided by an ecological framework [32] (pp. 465-486), we developed an initial codebook using a "lumping" technique to look for overarching themes and coded transcripts into three broad categories of influences related to physical activity participation: individual, environmental, and socio-cultural influences [34]. BKL then used a "splitting" technique to look for more detailed themes or smaller categories within the three broader categories of influences [34]. A subsequent codebook relevant to the present study was then developed and continuously discussed and refined among the research team. BKL and a research assistant independently coded a subset of the data using the refined codebook, with an observed agreement of >90% between the coders. Any discrepancies were discussed among the research team to reach consensus. Data were then recoded using the final codebook.
We first analyzed the focus group and interview data separately to identify major and minor themes in each dataset, and then compared and contrasted emerging themes. We organized themes into five categories of environmental factors guided by the Ecological Model of Active Living, developed by Sallis et al. [32] (pp. 465-486): built, social, organizational, policy, and natural environments. To facilitate data triangulation, we transformed results from the built environment audits into qualitative narrative summaries and then compared them with the themes identified in the focus group and interviews. We used NVivo (Mac 11, QSR International Pty Ltd., Doncaster, Victoria, Australia) to assist coding and analysis.
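The observed agreement figure reported above (>90%) is simply the proportion of text segments assigned the same code by both coders. The sketch below illustrates this calculation with hypothetical code labels; the study itself used NVivo for coding, and the function name and labels here are our own.

```python
def observed_agreement(coder_a: list, coder_b: list) -> float:
    """Proportion of segments assigned the same code by both coders."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same segments")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codes for ten transcript segments (labels are ours)
coder_a = ["built", "social", "policy", "built", "natural",
           "social", "built", "organizational", "policy", "natural"]
coder_b = ["built", "social", "policy", "built", "natural",
           "social", "built", "organizational", "built", "natural"]
print(observed_agreement(coder_a, coder_b))  # 1 disagreement in 10 -> 0.9
```

Note that raw observed agreement does not correct for chance agreement; chance-corrected statistics such as Cohen's kappa are often reported alongside it.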
---
Results
A total of 118 adults, aged 40-91 years, participated in the focus group discussions. The socioeconomic characteristics of participants broadly aligned with the composition of the communities in which they lived. Although all participants were sedentary and outside of the optimal BMI range (mean BMI = 31.9 ± 5.77), three-fourths (75.4%) rated their health as "good" to "excellent".
Figure 2 below summarizes the sub-themes identified with each data source.
Built environment audits identified a range of active living assets in each town (Table S1). We identified retail businesses, professional services, community services, outdoor physical activity facilities, outdoor lighting, aesthetics, and clean and wide sidewalks in all towns. In many communities, we did not observe continuous and even sidewalks, street intersection safety features, or biking facilities. We documented stray animals in six out of eight of the towns. The presence of other built environment features, such as indoor recreational facilities and trails, varied considerably between communities.
---
Interpretation of Built Environment Audits, Focus Groups, and Interview Data
Emergent themes within the five categories of environmental influences were described in relation to the observed built environment characteristics. Representative quotes are presented in Table 4.
---
Built Environment
In all focus groups, participants described the presence of physical activity facilities, such as sports fields, recreation centers, swimming pools, or gyms, confirming the findings of the built environment audits. Focus group respondents in several communities also described the common use of non-traditional or mixed-use spaces for physical activity, such as school athletic facilities and hotel swimming pools, although personal connections with facility management were sometimes required. For those who lived outside of town, distance hindered facility use. Focus group participants and key informants commonly expressed a desire for larger and more diverse recreational centers with a broader range of physical activity opportunities, especially in the winter.
Many participants specified that walking was the preferred form of physical activity. Participants' comments reflected findings of the built environment audits that variation existed in the presence, condition, and safety of the sidewalks and streets in town centers. Several focus group and interview participants felt that investment was needed in pedestrian-friendly features, and that the lack of sidewalks, poor sidewalk quality, and the presence of stray animals were major barriers to walking outside. In addition, town walkability was subject to weather conditions, especially in communities where sidewalks were not adequately cleared of snow.
---
Social Environment
The audit tool did not capture social environments, but the focus groups identified them as crucial to rural residents' usage of the available physical activity resources. The social environments at recreational facilities influenced some residents' physical activity behaviors. Some participants perceived sports fields and gyms as youth spaces, and felt out of place in these spaces.
Several participants described a desire for fitness opportunities with people of a similar age and fitness level. Some said that they felt uncomfortable exercising with younger and more physically fit people, while others referenced the camaraderie of a peer atmosphere. The desire for age-appropriate opportunities was supported by key informants' complaints about the lack of adult-specific programs and activities in rural communities. The benefits of structured exercise classes with peers were discussed more commonly in focus groups with women than in groups with men. To some women, a socially familiar environment provided a sense of safety for physical activity.
---
Organizational Environment
Facility and program schedules strongly influenced use and engagement. People who worked business hours or lived far from facilities reported that operating schedules often did not meet their needs. When multiple users shared facilities, access for children and youth was viewed as the priority.
To some male residents, the quality of facilities was an important factor, and outdated equipment was cited as a barrier to use. Further, both residents and key informants agreed that cost (e.g., membership fees, class fees) was a barrier to participating in some activities. For example, several participants reported that they enjoyed skiing, but found regional ski resorts to be cost-prohibitive.
In addition, information dissemination influenced participation in scheduled physical activity. Many participants described social networks (friends and family) as the means by which coordinated opportunities, such as pick-up basketball games and fitness classes, were most often promoted. Some described feeling excluded because they were less connected to the networks in which this information would be shared. In addition, although many residents were able to identify local programs and opportunities, they were not always aware of specific schedules or content. Key informants suggested that greater promotion of existing opportunities for physical activity would be helpful.
---
Policy Environment
Residents cited poor city planning and lack of maintenance of existing facilities as barriers to physical activity. These comments often were related to perceptions that local governments do not have funding or interest in promoting physical activity to adults. In some towns, key informants also expressed this belief. However, although several key informants criticized the lack of sidewalks and sidewalk quality, in one community, the key informants reported improvements in these features, suggesting differences in political agenda and priorities between jurisdictions.
---
Natural Environment
Although seasonal factors and the natural geography were not captured in the built environment audits, the natural geography of Montana, including its diverse terrain, open spaces, and water features, was raised in all focus groups and in some interviews as an important facilitator for leisure-time physical activity, including hiking, running, skiing, hunting, and fishing. However, while the warmer months favored outdoor activities, extreme winter weather was described by focus group participants and key informants as a significant barrier to outdoor physical activity.
---
Discussion
The aim of the present study was to use information gathered from built environment audits, resident focus groups, and key informant interviews to understand factors that influence physical activity among sedentary, overweight, midlife and older adults living in rural communities. While built environment audits provided a blueprint of the characteristics of the built environment, resident focus groups and key informant interviews provided critical contextual information on how the rural built environment encourages or discourages physical activity.
In the eight towns sampled in this study, spaces for physical activity were available but were not always perceived to be "activity-friendly" for adults. For example, although some residents expressed their desire to use the athletics facilities at schools, these spaces are usually prioritized for use by youth sports. Competition for physical activity spaces in rural settings has not been discussed extensively in the literature, but suggests a need for shared-or open-use policies with schools outside school hours to enable structured physical activity programs that are tailored to adults [14,35,36]. In the focus groups and interviews, we commonly heard about the use of "non-traditional" spaces for physical activity. Through community audits, we identified a range of other possible spaces that potentially could be utilized for hosting physical activity programs, such as churches, libraries, and municipal buildings. Other studies have found that non-traditional facilities, such as community centers, churches, and worksites, often are used for both planned and spontaneous physical activities in rural communities [19,37]. As funding for new construction and facility management is limited, attention towards optimal utilization of existing facilities is warranted.
Although previous studies have found social support, such as accountability to family and friends, company from pets, and peer influences, to facilitate engagement in physical activity among rural adults [18,38], triangulation of our findings adds to the literature by showing that social networks also play a critical role in promoting physical activity opportunities among rural residents. Events, classes, and activity groups often are organized informally and publicized by word-of-mouth and social media. Participants also described how local connections and promotion provided access to physical activity equipment at private facilities, such as those in hotels and hospitals. Finding ways to broaden promotion of existing and emerging opportunities and to support the operators of private physical activity spaces in expanding access should be considered.
Previous research has found the aesthetics of rural towns to be associated with physical activity levels [39,40]. All the rural towns in this study had vibrant town centers with pedestrian-friendly features and facilities for physical activity. Nonetheless, poor city planning and inadequate maintenance of facilities were perceived to be barriers for using existing indoor and outdoor spaces. For people living outside of town, geographic dispersion and a lack of transportation hindered utilization of available assets in town centers. Instead, many favored outdoor activities in the countryside, such as hiking, skiing, hunting, and fishing. These findings suggest that land-use policies affecting spaces both within and outside of towns are critical for the promotion of physical activity. Active and early engagement of residents in local planning and management processes may help improve coherence between resident demands and policy decisions [41].
This study also identified several important considerations for future built environment research in rural communities. First, we found the qualitative research extended the findings from the built environment audits and enriched our understanding of how communities perceive and interact with existing built environment features. As rural physical activity is complex and multifactorial, using a single assessment approach may limit the breadth and application of findings.
Second, our findings suggest that some constructs in the built environment audit tool are not relevant to all rural towns. For example, some street and intersection safety features, and biking facilities were not observed, and were not mentioned in either the focus groups or the interviews. This differed from what has been identified in other rural studies [42,43], and could be attributable to the composition of our participant sample. Our findings also suggest a benefit to collecting more details about recreational facilities and their operational characteristics (e.g., distance to residents' homes, hours of operation, and quality of equipment), because of the important role of these factors in facility utilization [35,37]. Additionally, in locations where outdoor activity is popular, it may be helpful to gather data on natural geography features that provide opportunities for exercise such as trails, lakes, rivers, and mountains.
Third, there may be a benefit to adapting rural built environment audit tools to capture more places where residents are physically active. For example, we observed good to excellent condition of sidewalks in all town centers, but learned from local residents that sidewalk maintenance and availability was often suboptimal in certain areas of town where they felt it was needed. It is possible that the community audits could be enhanced by separately assessing features in town centers and outside of town (rather than combining the results of the walking and windshield tours) or by consulting local residents about the places and spaces that they go before creating the community maps.
The present study has a few limitations. First, data collection was limited to rural Montana. However, we believe that our purposeful selection of diverse towns likely improved the relevance of the findings to rural communities in regionally proximate states, such as Idaho, North Dakota, South Dakota, and Wyoming, where population characteristics are similar. Second, our research focused on mid-life and older sedentary adults. Younger and more active residents may have different perceptions of physical activity opportunities and may interact differently with the environment. Third, as our study was cross-sectional, some seasonal factors may have been missed. Finally, because we did not specify to participants the full range of types of physical activity (e.g., recreational, functional, leisure) that we wanted to learn about, they may have limited their discussions to the types of activity most often discussed in the popular press.
---
Conclusions
Our findings suggest that rural communities have a number of built environment assets that promote active lifestyles but that their potential may not be fully realized. Given resource constraints and competing priorities, building new recreational facilities and structures to support active transportation is unlikely in many rural communities. However, enhancing existing features (e.g., current facilities and natural assets) and identifying opportunities to maximize their use, such as increasing the promotion of classes and available spaces and revisiting scheduling, could support physical activity and help build momentum towards larger changes. Involving residents along with other stakeholders in the city planning process should be a priority. Our experience suggests that future rural active living research would benefit from using a triangulation approach to enhance understanding of the unique characteristics of rural communities and to identify relevant strategies for improving physical activity opportunities.
---
Supplementary Materials:
The following is available online at www.mdpi.com/1660-4601/14/10/1173/s1, Table S1: Individual item availability for the built environment audits.
---
Author Contributions: All authors contributed extensively to the work presented in this paper. Brian K. Lo and Nicolette V. Jew wrote the first draft of the manuscript. Brian K. Lo, Emily H. Morgan and Laurel F. Moffat performed the data analysis. Meredith L. Graham and Lynn C. Paul implemented the study and collected the data. Sara C. Folta, Miriam E. Nelson, and Rebecca A. Seguin designed the study. All authors discussed the results and implications and commented on the manuscript at all stages. All authors also provided the approval of the final version of the manuscript.
---
Conflicts of Interest:
The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Recent years have seen a considerable shift in the focus of public investment agencies from extensive roadway networks to a more planned approach that meets environmental, cost, and social dimensions more aptly. Past research has mainly explored the engineering aspect and cost parameters, while the human or social component is often neglected. This study aims to identify the trip-making behaviour of residents in an urban area towards bus transport network enhancement. Abu Dhabi, the location of study, is heavily dependent upon car travel, creating much congestion, which the local government seeks to address by enhanced public transport. This work examined eight public-transport routes in two zones, with data collected on both weekdays (n = 751) and weekends (n = 769). Multinomial logistic regression models showed that respondents highlighted overcrowded buses and traffic congestion as two of the main hurdles pertinent to urban routes in the bus network influencing their mode choice. Proposals pertinent to the local authority for further consideration need to factor in current low satisfaction with bus transit network coverage, low satisfaction with the quality of bus rides, inhibiting a mode shift from cars/taxis towards buses, cumulative income profiles of public-transport users, with findings that the low-income bracket is already at saturation, and that reducing congestion needs innovative (sociodynamic rather than technical road network) public-transport solutions. | Introduction
Urban roadway asset management is complicated by the social, environmental, political, and budgetary constraints of transportation agencies, making sustainability the primary concern [1]. Current research on the life-cycle assessment of roadways has focused on either material type or pavement overlay (e.g., Hasan et al. [2], Santero et al. [3], and AzariJafari et al. [4]). The policies and decisions involved in investments in transportation infrastructure interact with the exogenous variables of urban density, traffic congestion due to specific roadway design (e.g., single vs. multiple carriageways, intersections and length, and public-transport lanes), and number of lanes [5]; however, complementary facilities of on-street parking and adjacent parking zones [6] must also be considered. Shoup [6] and Hawas et al. [7] noted that the decision of commuters to choose between private vehicles and public transport is affected by these factors. The choice of travellers to lean towards any mode, regardless of its private or shared nature, is affected by the trip purpose, perceived service quality, and service attributes, which are sensitive to the individuals and are largely shaped by the transit options available in a region, social and cultural norms, and the trade-off between perceived service and quality [8]. Mass-transit systems also open a window of opportunity for any urban area to reduce its transportation-related cost and environmental burdens [9]. Yet, due to the high investment cost and public involvement required, administrative agencies demand that the perception of transportation system functionality and attributes be an integral part of planning.
A number of studies [10,11] have suggested that people's attitudes towards transportation systems are becoming increasingly complex as their understanding of the daily commute, safety, travel duration, and ride quality changes. This is especially significant since transport authorities have to optimise the public system against multiple competing private, short-term rental, shared, and on-demand options. In a changing transit mode choice environment, micro-mobility integration can affect public-transport attraction compared to other modes, particularly in the wake of a return to normalcy after the lifting of COVID-19-related mobility restrictions, where increased mobility is expected [12]. The importance of an accessibility-based approach in this context was recently explored in a study by Ali et al. [13], which highlighted that focusing on accessibility to plan transport solutions is especially significant for resilient transport planning, and that transit solutions should be gauged through travel time.
Nonetheless, the association between public-transit accessibility and usage frequency is not a recent topic in transport policy research and has been addressed in multiple studies soliciting usage patterns and underlying contributory variables from questionnaire surveys. As one of the earliest explorations, an empirical study on the association between the travel behaviour of urban travellers and the scale of the urban neighbourhood was conducted by Krizek [14], who found that the reported vehicle miles travelled were reduced if the accessibility of the neighbourhood increased. Another study focusing on the effect of urban form on the variation of travel behaviour was conducted by Pan et al. [15] in four selected Shanghai neighbourhoods. They proposed that, if neighbourhoods are designed with denser street networks, the increased reliance on private vehicle travel induced by higher incomes may be replaced by bicycle/pedestrian trips.
The reasons for a traveller to choose to drive to work in a small urban English area were investigated by Gardner and Abraham [16] through 19 semi-structured interviews with private vehicle users. They found that the decision may be primarily driven by monetary costs, effort minimisation, origin-destination locations, and time spent in the journey, as public transport was perceived to be comparatively slower. Delays in the public-transport system and lack of reliance on its schedule, strikes, and perceived safety were noted as critical factors. Private vehicle commuters also highlighted the importance of the public-transport system in dealing with the problem of parking, which has been noted by researchers [17,18] as responsible for traffic congestion, as well as the cost and environmental burdens of a roadway system.
Another early qualitative study to identify the mode choice attitude of car and public-transport users was conducted by Beirão and Cabral [8]. They found that the mode choice is affected by situational variables, perceived performance, journey type, and user lifestyle and characteristics. They proposed that the policy-making process should accommodate customer expectations so that the usage of public-transport systems can be increased. In the context of transitioning economies, Grdzelishvili and Sathre [19] investigated the travel behaviour of Tbilisi residents. They identified perceived safety, comfort, frequency, and time as the most important factors that tended to skew the survey respondents towards private vehicle ownership and usage.
The service quality attributes of public transport and the social dynamics associated with car use and ownership factors were also found to influence travellers' mode choice in a study by Javid et al. [20], where it was also observed that, for a public-transport authority to motivate more users towards public bus transport, survey responses and service quality attributes should be analysed and optimised. In a follow-up study [21], the authors argued that, in a mixed-mode use environment where multiple competitive transport choices are present, including private cars, public bus transport, shared bus transport and shared taxis, and car-rental services, it is imperative to analyse user satisfaction according to service quality attributes. Data on bus transport riders using the flexible on-demand service options of two app-based bus services were collected using a questionnaire survey. Applying factor analysis and structural equation modelling techniques, the authors found that waiting time at the bus stop, income profile and profession of the travellers, vehicle ownership, and trip purpose were significant predictors of mode choice. The study also noted that a positive perception among users increases their tendency to use bus transport. Students and privately employed individuals were more inclined to use bus services, whereas increases in bus-stop waiting time and travel time, as well as low coverage/accessibility, negatively affected usage.
The City of Abu Dhabi has witnessed an increase in population, accompanied by an increasing dependence of commuters on private vehicle use, resulting in traffic congestion in urban Abu Dhabi localities [22]. Most studies in the country have focused on road transport from the infrastructural [23,24], environmental [25,26], cost [27,28], or traffic safety [29] perspectives, while some [30] extended the research to operational and facility management issues for providing connected pathways from urban communities to city centres and central business hubs and districts. The above-referred studies hinted that the urban transportation network cannot be evaluated solely on conventional cost and environmental aspects; a social aspect should also be considered. However, very few studies have addressed the travel behaviour and perception of the urban public-transport network in Abu Dhabi city.
The literature review conducted above highlights that, in order to promote public transport among urban travellers in a mixed-mode choice environment, the critical variables of service quality, accessibility, and travel time need to be optimised. However, the definitions of the former two attributes are scattered across the literature and region-dependent: service quality may include safety, onboard facilities, connection (waiting stops/stations) facilities, and cost, while accessibility may include network coverage, service frequency, seating, and community inclusion. Additionally, most studies were conducted in the European region, where the public-transport system is well developed and car-free precincts exist that regulatorily and culturally promote sustainable transit options (walking and public transport) over car use. Studies conducted in the developing world either focused on on-demand bus mobility options or did not include income- and employment-related variables, which affect affordability and lack-of-choice parameters, potentially rendering travellers incapable of choosing costlier options over public-transport services. Abu Dhabi provides a unique opportunity to investigate the travel mode choice patterns of multicultural residents belonging to different income groups and sociodemographic categories in a highly developed infrastructural yet car-dependent urban setting, where enhanced public-transport planning informed by public preferences can trigger a positive shift. This study attempts to address this gap by soliciting public responses to a questionnaire survey in order to establish a clear definition of the service quality, accessibility, and travel time attributes for Abu Dhabi (and, by extension, the multicultural countries of the Gulf Cooperation Council) that can encourage public-transport uptake, as well as to produce insights into the various sociodemographic classes that utilise this mode for their transit needs.
---
Method
This work was conducted to form the basis of a pilot study exploring the application of innovative mass transit over the lifecycle of a transportation infrastructure asset. The strategy of this study was intended to primarily focus on capturing the use of public transport, specifically bus transport, in the urban area of Abu Dhabi. Travel behaviour, user demographics, attitude towards travel, and trip distribution are emphasised.
---
Questionnaire Design, Bus Routes, and Sample Size Selection
The questionnaire used for data collection was designed to solicit the travel information of bus users, their perception of the existing bus network, the demographic profile of the service beneficiaries, and their respective attitudes towards travel attributes: network coverage, quality and satisfaction, perception of congestion, and potential improvement strategies. The questionnaire was limited to 11 multiple-choice questions designed to take less than 5 min. The detailed questionnaire, including the sub-questions and options, is provided in the Supplementary Materials. The survey was administered using the CAPI-based surveying methodology for on-site data collection. Teams of multilingual surveyors administered questionnaires in both Arabic and English and were able to assist passengers from various ethnic and lingual backgrounds.
The purpose of the study was to target the steady growth in urban Abu Dhabi; as such, according to the DoT observations, the area between the Corniche and Hazaa Bin Zayed the First Street was selected. To increase the range of collected samples, interviews were conducted on both weekdays and weekends during two 8 h shifts. The collected data were tabulated using MS Excel. A total of 769 interviews on the weekend and 751 interviews on weekdays were completed, and all records with missing or incomplete responses were disregarded according to the exclusion criteria set by the local transport authority responsible for data collection, curation, and management.
---
Data Analysis
An analysis was performed on the survey data collected for the Public-Transport Usage Study of the Abu Dhabi Department of Transport, as part of the Abu Dhabi government's initiatives to reduce travel dependency on cars and to reduce the increasing traffic congestion problems currently being observed in the city.
Logical consistency checks were performed on the raw MS Excel data files to address data sparseness, outliers, and missing data. Interlinking passenger demographics against travel attributes resulted in minor data revisions. The revised data were broken down into three different sections: the distribution of generated trips for each mode (i.e., bus and car travel) and the current level of network coverage. The literature review section highlighted that accessibility, service quality, and travel time affect the mode choice of travellers; however, the exact distribution and inclusion of variables within these attributes differed across studies. In order to investigate this further and estimate the effect of including service frequency, perceived congestion, onboard situation, trip purpose, and bus-stop and coverage facilities on public-transport uptake over competing modes (particularly private cars), four different regression models were tested with two primary objectives: whether the inclusion of the variables in the TSC, TA, and SDV blocks improved model fit, and which parameters were strong predictors.
Three different variable blocks were created for statistical analysis, with the variable description presented in Table 1 and comparison method explained in the last three rows. The analysis was conducted in SPSS v22. Models were controlled for the reference category in ordinal regression analysis (i.e., very satisfied for NetCovSat, five or more times a week for FBT, and first time for FCT). Reference categories were selected to identify the comparative influence of independent variable blocks on decreasing the satisfaction level of the public-transport system and increasing reliance on travel by taxis and private cars.
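As an illustrative sketch of the modelling approach above (not the study's SPSS analysis; the data are synthetic and the variable roles hypothetical), a multinomial logit with a fixed reference category can be fitted by maximum likelihood as follows:

```python
import numpy as np

rng = np.random.default_rng(42)
n, k, cats = 800, 3, 3  # respondents, predictors, outcome categories

# Standardised predictors standing in for TSC/TA-style variables
# (e.g. perceived bus frequency, distance to stop, journey-time satisfaction)
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])

# Draw a 3-category outcome (0 = reference, e.g. "very satisfied")
# from a multinomial logit with known coefficients
B_true = np.zeros((k + 1, cats))
B_true[:, 1] = [-0.5, 0.8, 0.1, -0.6]
B_true[:, 2] = [0.3, 1.2, -0.2, -1.0]
P = np.exp(X @ B_true)
P /= P.sum(axis=1, keepdims=True)
y = np.array([rng.choice(cats, p=p) for p in P])

# Fit by gradient ascent on the multinomial log-likelihood,
# holding the reference-category column of coefficients at zero
B = np.zeros((k + 1, cats))
Y = np.eye(cats)[y]
for _ in range(3000):
    Q = np.exp(X @ B)
    Q /= Q.sum(axis=1, keepdims=True)
    B[:, 1:] += 0.5 * (X.T @ (Y - Q))[:, 1:] / n

# Exponentiated coefficients are the odds ratios vs. the reference category
odds_ratios = np.exp(B[:, 1:])
print(odds_ratios.round(2))
```

Exponentiating each non-reference coefficient gives Table 2-style odds ratios: values below 1 make that category less likely, and values above 1 more likely, per unit change in the predictor.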
---
Results and Discussion
The results of the analysis were tabulated in separate sheets according to their respective occurrence on weekdays and weekends. Statistical analysis suggests that the majority (57%) of the survey respondents were South Asian, regardless of weekday or weekend. Moreover, the younger (i.e., 25-34 years old; 48% for both weekdays and weekends), predominantly male (weekdays: 86%; weekends: 89%) population, largely working full-time, formed the largest (83%) proportion of the respondents. According to the previously recorded statistical distribution of Abu Dhabi city residents, these results are representative of the local population, which is predominantly (62%) male and under 34 years old (66%), with over 50% being of South Asian descent [31,32]. The income profile captured in the survey showed that the majority earned a gross monthly salary of AED 1000-5000, which is also in line with the findings of previous statistical studies that found the majority to be full-time workers earning an average monthly salary of AED 3500.
Regarding the statistical response distribution of the qualitative data variables, the majority perceived bus travel as an uneasy transit mode yet did not find buses to be very crowded; however, respondents were unsatisfied with the current distribution of bus stops on the surveyed networks, as they reported spending over 15 min to reach the nearest stop. Additionally, the majority had a neutral perception of the current travel time while using public bus services and either a good or neutral perception of the current conditions of bus stops.
To address the research question of travel behaviour patterns and which variables define service quality, accessibility, and the eventual mode choice, three multinomial dependent variables (MDVs) were identified: frequency of bus travel (FBT), frequency of car travel (FCT), and network coverage satisfaction (NetCovSat). NetCovSat was originally recorded on a Likert-type scale in order of decreasing likeability of the DV, whereas FCT and FBT were arranged with "1" representing the most frequent travel (i.e., five or more times a week) and "6" representing the least frequent travel ("first time"). The percentage distribution of respondents on each scale was used to reverse-recode FBT and NetCovSat so that a higher occurrence is represented by an increasing numeral order.
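The reverse-recoding step can be sketched as follows (a minimal illustration; the six-point FBT scale endpoints are taken from the text, and the function name is ours):

```python
def reverse_code(value, scale_min=1, scale_max=6):
    """Flip a scale so that higher numbers represent higher occurrence,
    e.g. FBT: 1 ("five or more times a week") ... 6 ("first time")
    becomes 6 ... 1."""
    return scale_max + scale_min - value

fbt_raw = [1, 3, 6]                        # original coding: 1 = most frequent
print([reverse_code(v) for v in fbt_raw])  # → [6, 4, 1]
```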
The probability of mode choice for a traveller was affected by several parameters and factors of transportation system characteristics, travel attributes, and sociodemographic variables, as shown in the multistage multinomial logistic regression models summarised in Table 2. Results from the weekday analysis are presented first, followed by the weekend analysis results. Variables from each block were carried forwards to the subsequent analysis, except for the SDV independent variable block, which was analysed separately before its variables were added to the logistic regression equations for the final analysis.
Odds ratios, i.e., the probability that a certain variable may influence the outcome of the model when all other variables are controlled, as well as model fit and significance level, of the regression models for recorded polychotomous variables are provided in Table 2.
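As a reminder of how these figures arise (a generic sketch; the coefficient below is illustrative, not one of the study's), an odds ratio is the exponential of a logit coefficient:

```python
import math

def odds_ratio(beta):
    """OR = exp(beta): OR < 1 means the outcome category becomes less
    likely per unit increase in the predictor, OR > 1 more likely."""
    return math.exp(beta)

# Illustrative coefficient only: a negative beta yields OR < 1, the pattern
# reported for perceived bus frequency vs. coverage satisfaction
beta_freq = -0.47
print(round(odds_ratio(beta_freq), 3))  # → 0.625
```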
---
Factors in Traveller Satisfaction from Public-Transport Network Coverage
Analysis results showed strong correlation between the transportation system characteristics and satisfaction of public-transport system users, as also reported in the literature. Results were generally similar across weekdays and weekends. Distance to nearby bus stop was only negligibly identified as an obstacle across all four variable blocks, with the OR remaining in the range of 0.98-1.004, signifying a relatively unimportant association. Strong correlation of traveller satisfaction with frequency of buses and network coverage was also noticed, with odds ratios < 1 (Table 2) for all variable blocks with high significance, implying that, as users perceived buses to be more frequent, the probability of respondents being satisfied with the network coverage also increased. These results are partially supported by findings from similar cultural contexts in literature, where service frequency [33] and network coverage [34] were found to define the accessibility parameter, yet the direct correlation between parameters was not estimated.
The second main concern of public-transport users was the journey time, where increasing satisfaction with time spent on a trip was associated with a higher rating of network coverage (OR ≈ 0.586-0.642 < 1). Figure 1 also shows this strong association, whereby 37% of weekday users and 48% of weekend users were satisfied with the coverage of the public-transport network. Most respondents (40% weekday, 53% weekend) were satisfied with the frequency of buses and journey time (47% weekday, 53% weekend). In general, public-transport users were more satisfied with the network as these two factors became more satisfactory. These results somewhat comply with the findings of Gibson et al. [35], which compared rapid bus lanes against mixed traffic, finding that savings in user time represented one of the most important benefits, and that its relation with network coverage was similar to that of service frequency, following response curves that displayed an exponential- or power-model-style trend.
Sociodemographic variables of nationality and income showed little effect on the probability of a user responding favourably with regard to network coverage (OR ≈ 1), with only a slight influence of age (OR ≈ 0.8, for the SDV and third-stage models).
Users were also asked whether their mode choice was influenced by travel attributes; closeness to work and family was reported by all users as most important. The main reasons stated for dissatisfaction with PT network coverage were crowded buses (67% and 70%) and traffic congestion (~50%). This suggests capacity distribution in public buses and traffic congestion on roads as critical issues, as also noted by Tyrinopoulos and Antoniou [36]. Further illustrating this, Figure 2 shows that, for the travellers who were largely dissatisfied with the current network coverage of the public-transport service in the studied region, onboard crowding and traffic congestion were noted as significant variables influencing their perception of public transport. On the other hand, Figure 2 also shows that the satisfied traveller groups largely considered bus travel the easier transit mode for their work- and family-related trips.
Figure 2. Perceived obstacles in public-transport user satisfaction level. The variable description for the legend is described in Table 1.
---
Crosslink between Travel Mode Choice and IDV Blocks
Anticipated yet contrasting results were obtained for the transportation system characteristics across all variable blocks of travel mode choice models. A user's choice of mode was relatively unaffected by the distance from the bus station (OR ≈ 1), while journey time adversely influenced travel by both car and bus. Users reported that, in bus travel, the likelihood of trip frequency tended to decrease with increasing traffic (OR generally >1). On the other hand, increasing quality of ride positively affected the frequency of bus travels (OR generally <1) as also validated by the trendline shown in Figure 3, despite the scattered nature of traveller percentage.
Urban populations tended to be unevenly distributed in their transport usage patterns, as public-transport use tended to skew towards lower-income brackets. The Abu Dhabi population exhibited similar trends when analysed for sociodemographic variables, as shown in Figure 4, with the majority of users from the lower-middle-income bracket (1000-3000 AED/month) on both weekdays (~28.3%) and weekends (~21.9%). The results were also characterised by the observation that most users (~42%) in both datasets reported that they either did not own a car or could not afford to travel by taxis. Results displayed in Figure 4 also show that, regardless of bus travel frequency, respondents highlighted traffic congestion as the main obstacle.
This finding may further extend the range of critical public-transport service attributes to include not only the quantitative travel time attribute, as also noted heavily in the literature [34,37], but also the qualitative perceived traffic congestion variable, which is comparatively less explored.
---
Hypothesis Model Improvement Tests across TSC, TA, and SDV Blocks
The previous sections exhibited that the inclusion of income profile, service frequency, and perceived traffic congestion were significant variables influencing public-transport uptake over competing modes (particularly private cars), in addition to the conventional accessibility, quality, and travel time attributes. This was further investigated in the regression modelling stage, as four different models were tested with two primary objectives: whether the inclusion of variables improved model fit, and which parameter was a strong predictor. In the case of network coverage satisfaction, compared with the null hypothesis, adding the transport service characteristics (TSC) variables (see Table 1) improved the model, as the -2 log likelihood decreased (weekday: χ² = 253.74, p < 0.0001; weekend: χ² = 259.79, p < 0.0001), showing relatively good fit (weekday: ρ² = 0.132; weekend: ρ² = 0.142).
Further addition of the travel attribute variables improved model fit, as the -2LL further decreased (weekday: χ² = 333.801, p < 0.0001; weekend: χ² = 288.979, p < 0.0001), also improving the goodness of fit (weekday: ρ² = 0.173; weekend: ρ² = 0.158). When both variable blocks were removed from the regression model and only the effect of the SDV block was tested, the parameterised model showed a small improvement (weekday: χ² = 4.94, p < 0.0001; weekend: χ² = 3.56, p < 0.0001), while the McFadden ρ² also decreased. As can be anticipated, adding all three variable blocks simultaneously to the regression equation produced adverse effects on model fit (Table 2). The results show that, while both the TSC and TA variable blocks were significant predictors of a respondent's satisfaction with transport network coverage, and even though some SDVs may have also been successful in prediction, their effect may have been nullified once the TSC and TA variables were present in the logistic regression equations. This shows that including perceived congestion and service frequency when estimating the accessibility and network coverage variables improved the predictive ability of the model.
Mode choice models exhibited slightly different behaviour to the network coverage models, where similar effects of the SDV model and of the expansion of the "TSC and TA model" to include the SDV block were found for the weekend data. On the other hand, models based on weekday data tended to display optimum fitness for the final models that included all three variable blocks. For example, models investigating the frequency of bus travel found that the -2LL of the parameterised model containing all three blocks was lower than that of the null hypothesis (weekday: χ² = 76.14, p < 0.0001; weekend: χ² = 89.509, p < 0.0001), supported by a higher goodness of fit (weekday: ρ² = 0.052; weekend: ρ² = 0.053). Although the McFadden ρ² was comparatively lower for the mode choice models, the values were higher for the parameterised model with three variable blocks.
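The block-comparison statistics used above can be reproduced from model log-likelihoods; a sketch with illustrative values (not the study's actual log-likelihoods, but chosen to be of the same magnitude as the reported weekday TSC figures):

```python
# Illustrative log-likelihoods for nested multinomial models
ll_null = -961.1  # intercept-only (null) model
ll_tsc = -834.2   # after adding the TSC variable block

# Likelihood-ratio statistic: the drop in -2 log likelihood
lr_chi2 = 2 * (ll_tsc - ll_null)

# McFadden pseudo R^2: proportional improvement over the null model
mcfadden_rho2 = 1 - ll_tsc / ll_null

print(round(lr_chi2, 1), round(mcfadden_rho2, 3))  # → 253.8 0.132
```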
---
Conclusions
The analysis of the collected urban travel survey exhibited that travel attributes, especially service frequency, closeness to trip origin/destination, and traffic congestion, as well as the characteristics of the transportation system, are predictors of accessibility, network coverage, service quality, and, by extension, mode choice. This shows that, while optimising public-transport services, particularly in the multicultural, infrastructurally developed yet car-centric context of the rapidly developing countries of the Gulf Cooperation Council, it may not be sufficient to limit the definition of accessibility to extending network coverage, or of service quality to onboard seating or bus-stop quality. The perception of a more comprehensive network may itself be affected by the underlying variables of trip purpose, sociodemographic characteristics, traffic congestion while travelling on public transport, and service frequency, in addition to the more conventional ride quality, onboard crowding, and travel time variables.
The regression results for the CAPI-based questionnaire survey responses of urban Abu Dhabi residents showed that, within the TSC block, a traveller's distance from the bus station was comparatively unimportant, even though the past research covered in Section 1 identified it as a significant factor. Comparisons of the different variable blocks in the regression models showed that, across all datasets, satisfaction with network coverage was influenced only by the TA and TSC blocks, with congestion and bus frequency correlating with traveller satisfaction. When mode choice behaviour was evaluated, expanded models containing all three variable blocks were better suited to explaining the survey responses.
The findings of this study provide useful information about the importance that system users attach to several factors in the functionality of a public-transport system. This Abu Dhabi-based study suggests a relationship between the attributes of an urban public-transport network and a user's ultimate decision to travel on it rather than by private vehicle; further research in the field may strengthen the evidence for this association. It should be noted that, although this is one of the few studies analysing sociodemographic trends and public-transport usage in the car-centric mode choice context of the Gulf Cooperation Council countries, the United Arab Emirates, and particularly the City of Abu Dhabi, it has several shortcomings and limitations that can be addressed in future work. Firstly, this study only collected responses about public bus usage compared with car use and did not consider a mixed-mode transit option in which multiple competing public-transport modes can be compared with cars as the preferred mode choice. Secondly, it did not consider first- and last-mile choices, and responses on the provision of micro-mobility options supporting a large-scale public-transport network were not captured. This might greatly affect respondents' tendency to lean towards private or public transport regardless of frequency or network coverage, as micro-mobility integration might bridge gaps in the current system.
It is also noteworthy that interactions between variables and curvilinear effects may exist, which this study did not address. The authors acknowledge these shortcomings and aim to address them in future research. Nonetheless, this study showed that a future public-transport system needs to counter the adverse effects of traffic congestion and crowded buses, as well as improve ride quality and increase bus frequency on the investigated travel routes. Investment decisions taken by stakeholders in public-transport agencies should therefore consider the attributes of the trip as well as the characteristics of the transportation system itself.
---
Data Availability Statement: Data will be provided upon request.
---
Conflicts of Interest:
The authors declare no conflict of interest.
---
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/eng4020066/s1.
The Sewol ferry disaster severely shocked Korean society. The objective of this study was to explore how the public mood in Korea changed following the Sewol disaster using Twitter data. Data were collected from daily Twitter posts from 1 January 2011 to 31 December 2013 and from 1 March 2014 to 30 June 2014 using natural language-processing and text-mining technologies. We investigated the emotional utterances in reaction to the disaster by analyzing the appearance of human-made disaster-related keywords and suicide-related keywords. The disaster elicited immediate emotional reactions from the public, including anger directed at various social and political events occurring in its aftermath. We also found that although the frequency of Twitter keywords fluctuated greatly during the month after the Sewol disaster, keywords associated with suicide were common in the general population. Policy makers should recognize that both those directly affected and the general public still suffer from the effects of this traumatic event and its aftermath. The mood changes experienced by the general population should be monitored after a disaster, and social media data can be useful for this purpose.

---

Introduction
On 16 April 2014, the ferry Sewol, which was carrying 476 people including 325 high school students on a school trip, capsized and sank off the southwestern coast of South Korea. This disaster left more than 300 people dead, injured, or missing. The sinking of the Sewol severely shocked Korean society. Since the accident, it has been suggested that the public can be traumatized by indirect exposure to certain events through various media [1]. In fact, the scene in which the ferry capsized and sank as crew members were saved, leaving most passengers on board, was broadcast live. The public was repeatedly exposed to this scene for several weeks.
Early studies have consistently found that a disaster can lead to substantial mental health consequences, including post-traumatic stress disorder (PTSD). However, most of what is known about the mental health consequences of disasters has been derived from studies of focal groups of individuals who were directly exposed to the trauma, such as victims, their families, rescue/recovery workers, volunteers, and the communities in which they live [2]. Relatively few empirical studies have examined the effects of a major disaster on the mental health of the general population. At the same time, interest in public mental health has increased since the 11 September 2001 terrorist attack in New York City. Most data in this domain are derived from studies assessing the reactions of the general public in the US since the September 11 attacks [3,4]. These studies provide evidence of an association between indirect exposure to disaster through media and short-term PTSD-like symptoms [3]. This association was identified by analyzing data from representative samples and retro/prospectively collected social survey data. The data collection and assessments of mental health effects were performed several months to many years after the disaster. This lapse of months or years may cause biases in psychological research because retrospective studies are influenced by recall bias and the emotional state at the time of assessment [2,5]. Therefore, these approaches are not effective ways to monitor public mental health for purposes of real-time surveillance or intervention.
Accumulating evidence regarding psychological sequelae and the mechanisms associated with the emotional modulation of cognition suggest that vulnerability to disruptions in emotional equilibrium may be a common denominator of mental disorders [6]. It is therefore reasonable to assume that moods, long-term patterns of emotional states, can reflect mental health. In Korea, which is characterized by a consumer economy, the public mood was reflected in the substantial reduction in consumption following the Sewol ferry disaster [7]. Although consumer behaviors are among the most meaningful indirect indicators of the public mood [8], ways to directly monitor this mood would be preferable. Recently, several studies have suggested new methods for measuring public mood using social media data. It has been suggested that the analysis of social media data, such as weblog texts or documents, may be a useful way to identify the public mood [8].
This study presents a pragmatic simple method for monitoring the public mood using social media data, especially Twitter. This approach may be a better way to identify the post-disaster emotional reactions in the general population in that it permits tracking of public moods through the use of Twitter data. We use Twitter data to explore how Koreans' public mood changed following the Sewol disaster and offer suggestions based on our findings.
---
Methods
---
Data Sources
Social media data were collected from daily Twitter posts from 1 January 2011 to 31 December 2013 and from 1 March 2014 to 30 June 2014 using the social media analysis tool SOCIALmetrics™ (Daumsoft, Seoul, Korea). The SOCIALmetrics™ system contains social media data crawlers that collect posts from Twitter. The system also processes text using state-of-the-art natural language processing (NLP) and text mining technologies (Figure 1). The NLP module divides input text into sentences and segments the word forms contained in each sentence into a string of morphemes. The segmented morphemes are grouped into syntactic units via syntactic analysis. Once syntactic units are constructed, expressions denoting named entities such as people, locations, and organizations are recognized. Then, association analysis is performed to identify tuples of <topic keyword, associated keyword>. Finally, sentiment polarities for topic keywords are determined through sentiment analysis. The results of the whole analysis are delivered in a time-series fashion through an application programmer's interface (API) engine that accommodates various queries from users. The SOCIALmetrics™ system provides one of the most advanced solutions for crawling and mining Korean-language text. Natural language processing in Korean is much more complicated than in English because Korean is an agglutinative language in which a phrase is formed from more than one morpheme. In English, words are generally not segmented further because each word contains a single morpheme; in Korean, however, the morphemes that construct a phrase must be separated and each morpheme's part of speech must be distinguished, which makes the task especially complex. In addition, a Korean word or phrase can carry a very different meaning when used in different linguistic contexts.
To address these challenges, SOCIALmetrics™ utilizes an extensive semantic classification dictionary that contains over 1 million words. The morpheme and phrase analysis developed for SOCIALmetrics™ applies a method that extracts keywords, going beyond merely selecting simple words. The Twitter crawler utilizes a streaming API [9] for data collection using the so-called "track keywords" function. We tracked several thousand keywords that were empirically selected and tuned to maximize the coverage of the crawler, which operates in near real-time fashion. We estimated that the daily coverage of the Twitter crawler was over 80%. The collected posts were fed into a spam-filtering module that checks for posts containing spam keywords related to pornography, gambling and other advertising. The lists of spam keywords and spammers were semi-automatically monitored and managed. The data contain no information that could reveal the identity of a social media user; user confidentiality is thus maintained.
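The crawl-then-filter step described above amounts to a keep/discard decision per post: keep it if it mentions a tracked keyword and contains no spam term. A minimal sketch follows; the keyword sets are tiny hypothetical stand-ins for the several thousand tracked terms and the managed spam lists, and posts are assumed to be pre-tokenised:

```python
TRACK_KEYWORDS = {"anger", "anxiety", "sadness", "suicide"}  # hypothetical subset
SPAM_KEYWORDS = {"casino", "jackpot"}                        # hypothetical subset

def keep_post(tokens):
    """Keep a post if it mentions a tracked keyword and contains no spam term."""
    toks = set(tokens)
    return bool(toks & TRACK_KEYWORDS) and not (toks & SPAM_KEYWORDS)

posts = [["so", "much", "anger", "today"],
         ["anger", "at", "the", "casino"],  # contains a spam term -> discarded
         ["nice", "weather"]]               # no tracked keyword -> discarded
kept = [p for p in posts if keep_post(p)]
print(len(kept))  # 1
```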
---
Keyword Selection
---
Human-Made Disaster-Related Keywords
A traumatic event elicits a range of negative emotional reactions, including anger, anxiety, and sadness [10,11]. Emotional reactions following human-made disasters tend to be more focused on anger because there are policies and people to blame [11,12]. The Sewol disaster was a human-made accident; indeed, the President apologized, the Prime Minister resigned, and all crew members were arrested. Thus, we examined the emotional utterances in reaction to the disaster by analyzing the appearance of three words, "anger," "anxiety," and "sadness" on Twitter, focusing especially on anger.
---
Suicide-Related Keywords
The population-level suicide risk after disasters may be estimated by tracking the specific mood states associated with suicide using social media data. One previous study suggested that specific variables, such as suicide-related and dysphoric weblog entries, are significantly associated with national suicide rates [8]. The specific mood states associated with suicide can be examined by identifying the emotional words that usually appear with the word "suicide." We investigated the emotional words most likely to be associated with the Korean words jasal (suicide) and wooul (depression), using the accumulated tweets submitted to Twitter during the preceding three years (1 January 2011 to 31 December 2013) as our database. Association analysis was thus performed to identify tuples of <topic keyword, associated keyword>. Depression-related words were considered along with suicide-related words because depression, like PTSD, increases distress and dysfunction over time following traumatic events [13]; additionally, depression is a well-known major risk factor for suicide.
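Association analysis of this kind can be approximated by counting which words co-occur in the same tweet as a topic keyword and ranking them by frequency. This is a sketch of the idea only (the source's exact algorithm is proprietary), with hypothetical token lists:

```python
from collections import Counter

def associated_keywords(tweets, topic, top_n=3):
    """Rank words by how often they co-occur with `topic` in the same tweet."""
    counts = Counter()
    for tokens in tweets:
        if topic in tokens:
            counts.update(t for t in set(tokens) if t != topic)
    return [word for word, _ in counts.most_common(top_n)]

tweets = [["suicide", "despair", "shock"],
          ["suicide", "despair"],
          ["holiday", "despair"]]
print(associated_keywords(tweets, "suicide", top_n=2))  # ['despair', 'shock']
```

In practice, the ranking would use a real association measure (e.g. weighting co-occurrence against each word's overall frequency) rather than raw counts, but the tuple-extraction structure is the same.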
---
Generating the Keywords Time Series
Based on the human-made disaster-related keywords and suicide-related keywords, we generated the keyword time series, defined as the daily volume of tweets mentioning these keywords. First, we processed the texts collected from Twitter using state-of-the-art natural language-processing (NLP) and text-mining technologies; the NLP module divides input text into sentences and segments the word forms contained in each sentence into a string of morphemes. Second, the segmented morphemes were grouped into syntactic units via syntactic analysis. Finally, the volumes were computed per 100,000 daily Twitter posts mentioning the Korean words hwa (including bunno) (anger), bulan (anxiety), seulpeum (sadness), chunggyeok (shock), seuteureseu (stress), gotong (suffering), bigeuk (tragedy), jeolmang (despair), and apeum (pain/hurt), in a time-series fashion. All these volumes were normalized.
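The time series itself is just the daily count of matching posts scaled to a rate per 100,000 posts. A minimal sketch, under the assumption that each day's posts are available as token lists (the two-day dataset here is hypothetical):

```python
def daily_keyword_rate(days, keyword):
    """Rate per 100,000 posts mentioning `keyword`, one value per day.

    `days` is a list of days; each day is a list of tokenised posts.
    """
    rates = []
    for posts in days:
        hits = sum(1 for tokens in posts if keyword in tokens)
        rates.append(hits / len(posts) * 100_000)
    return rates

# Two hypothetical days with 4 posts each:
days = [[["anger", "rises"], ["calm"], ["calm"], ["anger"]],
        [["calm"], ["calm"], ["calm"], ["anger"]]]
print(daily_keyword_rate(days, "anger"))  # [50000.0, 25000.0]
```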
---
Results
Public mood trends were based on daily tweets that reflect public responses to the South Korea ferry disaster. Figure 2 shows the process by which negative emotions unfolded, comparing comments before and after the Sewol ferry disaster. The disaster was immediately followed by emotional reactions on the part of the public, with expressions of anger and sadness increasing substantially compared with the rates before the disaster. The number of posts mentioning anger and sadness sharply increased during the five days after the disaster. Even though the frequencies of these emotional words gradually decreased after 20 April 2014, their levels during the month following the disaster were notably higher than at baseline. In particular, the number of posts mentioning anger was much higher than those mentioning anxiety and sadness during the tracking period. Furthermore, expressions of anger rapidly and sharply increased again when specific events related to the disaster occurred. The peak dates (A-F) for anger and brief descriptions of the most important events are included in Figure 2. The suicide-related keywords we identified by association analysis are presented in Figure 3; they include several emotional words, such as chunggyeok (shock), seuteureseu (stress), gotong (suffering), bigeuk (tragedy), bulan (anxiety), jeolmang (despair), bunno (anger), and apeum (pain/hurt). Individual words inside the blue circle are the words associated with suicide, and "Depression" is one of them; the links are defined by association. These data were collected from daily Twitter posts between January 2011 and December 2013.
Figure 4 shows the trends in suicide-related words other than anger and anxiety in the general population before and after the Sewol ferry disaster. Chunggyeok (shock), seuteureseu (stress), gotong (suffering), bigeuk (tragedy), bulan (anxiety), jeolmang (despair), bunno (anger) and apeum (pain/hurt) were the target keywords most frequently associated with jasal (suicide) and wooul (depression) among the millions of tweets submitted to Twitter during the past three years (1 January 2011 to 31 December 2013). Surprisingly, the disaster led to immediate reactions in terms of suicide-related postings. The frequencies of all suicide-related keywords fluctuated greatly during the month following the disaster. Although we observed distinct differences in the emotional dynamics over time, the levels of all emotions were much higher during the month following the disaster than at baseline (Figure 4).
---
Discussion
---
Human-Made Disasters and Negative Emotional Reactions
We found that the Sewol ferry disaster caused negative emotional reactions in the public. The pattern of a short-term negative emotional reaction to a human-made disaster followed by its gradual attenuation is generally consistent with previous research findings. Those studies documented a gradual decline, over the course of a few months, in PTSD-like symptoms or other stress reactions among members of the general population who experienced the trauma indirectly through media reports [4,5,14]. Additionally, in the early period, the number of posts referring to anger was much higher than the numbers referring to sadness and anxiety, which is also consistent with previous studies of human-made disasters [11,12]. We also found that public anger was easily provoked by various events that occurred in the aftermath of the disaster, such as a report on the exacerbation of the tragedy by the government's incompetence, which elicited a large-scale reaction. These findings suggest that the dynamics of emotional arousal and coping in a general population after a disaster can be identified through real-time monitoring of specific emotional words appearing on social networking services such as Twitter.
---
The Sewol Ferry Disaster and Suicide-Related Postings
There is no general consensus regarding the relationship between disasters and suicide risk. Most studies of suicide in the aftermath of disasters have focused on natural disasters [15,16], and one study of the aftermath of September 11 found no significant effect of the disaster on the suicide rate of the general population [17]. Links between traumatic events such as human-made disasters and national suicide risk therefore require further research.
The findings of this study suggest that, at least in Korea, where the suicide rate is generally high, a human-made disaster can lead to an immediate increase in the suicidal preoccupation of the general public. Given that one of the most striking features of contemporary Korean society is its high and increasing suicide rate [18], our findings may have major implications for the national suicide risk in South Korean society after the Sewol ferry disaster. Clearly, using social media data to identify the moods most likely associated with suicide can be a much faster and easier approach than traditional methods for estimating the suicidality of the general public after disasters or traumatic events.
---
Lessons, Possibilities, and Further Challenges
Even people who are not directly involved in a disaster may nonetheless be affected by it through various channels, such as repeated news reports on the disaster on television or other media. Policy makers need to remember that the general population does not emerge unscathed from traumatic events, and the aftermath of these events should be the target of monitoring and intervention. Previous studies have noted the challenges to identifying the common characteristics of those affected by a disaster [5]. Historically, people directly associated with disasters were considered vulnerable, but this study suggests that everyone in society may be vulnerable to such events.
Social media such as Twitter, blogs, and Facebook can be venues for the expression of personal emotions. Once accurate filters and classifiers are developed, these media offer novel opportunities for policy makers to monitor the mental health of the general population by tracking the public mood at any time through analysis of posted texts. Although initial and well-known examples of using social media data to gauge the public mood came from the prediction of box office receipts [19] and stock markets [20], this methodology is being applied in various health-related research fields by tracking keyword usage among users of social media services, for example to estimate general happiness/subjective wellbeing [21], influenza outbreaks [22], and national suicide numbers [8]. Our findings have implications for improving research on moods and mental health following disasters. Tracking public moods through social media, as well as self-assessments of mental states using surveys or physician-reported health records, can provide links between traumatic events and mental health responses. Furthermore, it may be more pragmatic to use social media to monitor public mental health for real-time surveillance or intervention than to rely on these traditional means; this approach is more immediate and more cost- and time-efficient than conventional survey-based approaches. Moreover, data collected continuously, rather than through cross-sectional or sequential designs, are more useful both for understanding how the public mood develops over time and for identifying the major determinants of changes in the public mood after traumatic events. Ultimately, these approaches may help policy makers or government agencies find better ways to address negative public moods or prevent suicide.
Looking to the future, there is no doubt that social media data can play a foundational role as an information source for public health [23,24]. We also suggest that social media data on the emotions, thoughts, and desires of individuals offer opportunities to monitor public moods and perspectives through which the mental health and moods of the public can be understood. However, the following issues require additional clarification: "How close to the truth are the data provided by social media?" and "How do we make social media data more dependable?" That is, developing empirical justifications for knowledge derived from social media and designing sophisticated methodologies for analyzing such data are challenges for the future.
---
Conclusions
Policy makers should recognize that both those directly affected and the general public still suffer from the effects of this traumatic event and its aftermath. The mood changes experienced by the general population should be monitored after a disaster, and social media data can be useful for this purpose.
---
Author Contributions
Youngtae Cho, Hyekyung Woo, and Eunyoung Shim contributed to designing the study; Hyekyung Woo, Kihwang Lee and Gilyoung Song collected the data. Hyekyung Woo and Youngtae Cho carried out the statistical analyses, interpreted the results and drafted the manuscript. All the authors critically reviewed the manuscript and approved the final version.
---
Conflicts of Interest
The authors declare no conflict of interest.
Poverty incidence in Namibia is higher amongst female-headed households (46%) compared to male-headed households (41%). This situation is further worsened by females in households increasingly being forced to play multiple, conflicting roles after losing their spouses, and to work in marginal, part-time, informal and low-income jobs due to their lack of access to high-paying jobs, while having to take care of children, siblings and sometimes parents with no form(s) of assistance. In this study, a cross-sectional quantitative design based on the 2015/16 NHIES and an ordinal probit model was used to examine the household characteristics that contribute to poverty among female-headed households in Namibia, as well as their effects on the households' poverty levels. Results from this study showed that characteristics such as region (p<0.001), main language spoken at home (p<0.001), main source of income (p=0.009), location (p=0.016), and highest level of education (p=0.005) had significant associations with household poverty levels. Additionally, female-headed households in urban areas in the Hardap, Otjozondjupa and Zambezi regions, whose main languages spoken were English, German, Zambezi and other languages, with tertiary education and with the main source of income from commercial farming and other sources, were less likely to be severely poor and more likely to be not poor. Therefore, it is recommended that the Namibian government and policymakers further improve the livelihood of women, especially those heading households in the other regions, through a comprehensive social development strategy that covers both the immediate, short-term needs and the long-term needs of these women.

---

INTRODUCTION
Poverty is a condition in which an individual/household lacks the financial resources and essentials (such as shelter, clothing, clean water and food), and access to education, healthcare and transport (Okalow, 2022). In 2021, it was estimated that 9% (698 million people) of the global population lived in extreme poverty (i.e., on less than US$1.90 a day), while over one-fifth (1,803 million people) lived below US$3.20 and over two-fifths (3,293 million people) lived below US$5.50 a day (Suckling, Christenes & Walton, 2021). Poverty can be measured at two levels, namely absolute poverty and relative poverty. Absolute poverty describes a condition where an individual/household is unable to meet basic human needs, including food, safe drinking water, sanitation facilities, health, shelter and education (Okalow, 2022). This level of poverty varies from country to country depending on how poor or rich a country is, with each country setting its own standard or measure of poverty. On the other hand, relative poverty is a condition where an individual/household receives less than half of what average individuals/households get to sustain themselves, although not enough to meet their basic needs (Habitat for Humanity, 2017). This level of poverty does not remain constant but can improve when the economy of a country does better, which in turn affords citizens a better standard of living and the chance to reach their full potential.
In Namibia, an individual/household is classified as poor if 60% or more of the individual/household's total consumption is spent on food (Namibia Statistics Agency [NSA], 2018). This classification was further expanded to distinguish the severely poor from the poor using the food poverty line estimated from the 2015/16 Namibia Household Income and Expenditure Survey (NHIES) at N$293.10, at an exchange rate of US$1:N$13.23 (as of October 2018). Here, the food poverty line was defined as the cost of a basket of food providing the minimum recommended nutritional intake. Although food poverty lines are often considered the most extreme measurement of monetary deprivation, since the cost of non-food essentials is not included in their estimation and they are mostly estimated from household surveys of the country under consideration, the threshold of the food poverty line varies with the local cost of food and consumption behaviours in each country. For Namibia, the food poverty line was estimated with lower and upper bound estimates of N$389.30 and N$520.80 respectively (NSA, 2018). This means that an individual/household unable to spend at least N$520.80 per month on basic needs was considered poor, while an individual/household unable to spend at least N$389.30 per month on basic necessities was considered severely poor.
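Using the 2015/16 NHIES bounds quoted above, the three-level classification can be written directly. This is an illustrative sketch of the decision rule, not the NSA's implementation; thresholds are in N$ per month:

```python
SEVERE_POVERTY_LINE = 389.30  # lower-bound estimate, N$ per month (NHIES 2015/16)
POVERTY_LINE = 520.80         # upper-bound estimate, N$ per month

def poverty_level(monthly_spend):
    """Classify a household by its monthly spending on basic necessities."""
    if monthly_spend < SEVERE_POVERTY_LINE:
        return "severely poor"
    if monthly_spend < POVERTY_LINE:
        return "poor"
    return "not poor"

print(poverty_level(300.00), poverty_level(450.00), poverty_level(600.00))
# severely poor poor not poor
```

These three ordered categories (severely poor < poor < not poor) form the dependent variable that an ordinal model can then relate to household characteristics.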
Comparing poverty levels between men and women, the United Nations (2023) estimated that one-third (i.e., 33.5%) of employed women were living in poverty in 2019 compared with 28.3% of employed men in least-developed countries, with the World Bank (2020) concluding that the conditions associated with poverty affect nearly 46% of the world's population, with women representing the majority of the poor in most regions. Although the gender gap is less sharp in Europe, Central Asia, Latin America and other high-income economies, it is at its peak in developing regions such as East Asia & the Pacific, South Asia and Africa, leading to an over-representation of women among the poor globally (World Bank, 2020). In addition, while Europe and Central Asia, Latin America and the Caribbean, and other high-income economies have low female poverty rates among young people, East Asia & the Pacific, South Asia and Sub-Saharan Africa were reported to have high female poverty rates. However, as the Coronavirus disease (COVID-19) crisis has had a disproportionate impact on people's livelihoods, it is likely to worsen these poverty figures. Furthermore, it has also become evident that despite being poorer than men, women also face managing their households on their own due to changes in the social setup of societies. To be precise, females in households are now forced to play multiple, conflicting roles after losing their spouses, and have to work in marginal, part-time, informal and low-income jobs due to their lack of access to high-paying jobs (Lebni et al., 2020). Sadly, changes in demographic and population characteristics, social norms and the nature of family structure all appear to be encouraging female headship (Milazzo & Van de Walle, 2017).
A female-headed household can be defined as a household where a woman oversees and manages the family as a result of divorce, separation, immigration or widowhood (Javed & Asif, 2011). In many developing countries, there has been a significant increase in the percentage of female-headed households, with the majority of these women being widowed and, to a lesser extent, divorced or separated, while in developed countries, most female-headed households consist of women who never married or were divorced (World Bank, 2023). The association between the feminization of poverty and household headship comes from the idea that female-headed households represent an unbalanced share of the poor, and that they experience greater extremes of poverty than male-headed households, which further results in gender inequality (Milazzo & Van de Walle, 2017). Mwangi (2017) assessed the impact of poverty on female-headed households in Kangemi, Kenya and concluded that female-headed households experience stigma and exclusion arising from poverty and marital status, while the impact of poverty among women was felt in the pervasiveness of social problems such as child labour, prostitution and unwanted teenage pregnancies. Female-headed households were further impacted by poverty because of the traditional gender inequalities that serve to justify and maintain socioeconomic inequalities, prompting Mwangi (2017) to conclude that there was a direct link between poverty and female household headship. Furthermore, it has been reported that female-headed households more often face gender discrimination with respect to education, earnings, rights and economic opportunities, due to women being more vulnerable to poverty and lacking basic necessities as well as access to economic empowerment avenues such as credit facilities for business or agricultural expansion (Mwangi, 2017).
Moreover, women and girls are disproportionately affected by poverty and many have little or no say in the decisions which affect their lives. They often suffer from gender-based violence, social exclusion and child abuse, and are disproportionately affected by poor health and sanitation, with many having little or no money of their own, which makes them more dependent on others (Akokuwebe, 2015; Ambroggi et al., 2015; Mwangi, 2017; Health Poverty Action, 2018; Alarcón & Sato, 2019; United Nations Population Fund, 2020; Okafor & Borchelt, 2022).
Despite several re-distributive measures and social protection programs put in place by the Namibian government, high inequality continues to be evident in the country, reflecting a historical legacy of inequality of opportunity (World Bank, 2022). According to NSA (2021), 43.3% of Namibia's population live in multidimensional poverty, where an individual can suffer multiple disadvantages at the same time, such as poor health or malnutrition, a lack of clean water or electricity, poor quality of work or little schooling, with this poverty higher among female-headed households (46%) than among male-headed households (41%). Thus, women are poorer than men, a situation further worsened by women being alone and having to take care of children, siblings and sometimes parents without any form of assistance. As a result, an increasing number of female-headed households in developing countries, including Namibia, are emerging due to economic changes, economic downturns and social pressures, among others (Indexmundi, 2019). To date, a number of studies have been conducted on poverty in Namibia. However, the factors contributing to and influencing poverty levels, especially among female-headed households in the country, still need to be sufficiently explored. In addition, in five of the 14 regions in Namibia the majority of households were reported to be female-headed during the 2015/16 NHIES period, namely the Omusati (58.3%), Ohangwena (57.5%), Oshana (52.4%), Zambezi (51.8%) and Oshikoto (50.8%) regions, with an increased likelihood of being poor. This therefore raises questions about what might account for this over-representation of female-headed households in official accounts of poverty in the country, and how this is plausibly changing (or not) over time. Moreover, the relationship between gender and poverty is more complex and debated than ever, and is thus a potential area for policy makers to focus on.
For these reasons, this study aimed to identify the household characteristics that contribute to poverty among female-headed households in Namibia, as well as their effects on the households' poverty levels. Identifying these characteristics can be useful in interrogating the coping mechanisms put in place to reduce household poverty in the country, while the findings can further strengthen policies, with the possibility of incorporating them into poverty eradication programs countrywide, especially among female-headed households.
---
DATA AND METHODS
---
Research design
The study followed a cross-sectional quantitative research design using data extracted from the 2015/16 Namibia Household Income and Expenditure Survey (NHIES), the latest thus far in the country, obtained from the Namibia Statistics Agency. The NHIES is a household-based survey designed to collect data on income and expenditure patterns of households, and is the sole source of information on income and expenditure in the country. It is freely available to the public on the agency's website (www.nsa.org.na). The survey also serves as a statistical framework for compiling the national basket items for the compilation of price indices used in the calculation of inflation, and forms the basis for updating prices or rebasing national accounts, among others (NSA, 2018). The implementation of the 2015/16 NHIES was financed by the Government of the Republic of Namibia through the Ministry of Economic Planning Sectoral Budget. Technical support in the area of data processing, for example the development of data entry and listing applications, was provided by experts from the United States Census Bureau through funding by the United States Agency for International Development, while experts from the World Bank provided technical expertise during the sampling and data analysis stages (NSA, 2018).
---
Sampling design
The sample design used in the 2015/16 NHIES was a stratified two-stage cluster sample, where the first-stage units were geographical areas designated as the primary sampling units, while the second-stage units were the households. The primary sampling units were based on the 2011 Population and Housing Census enumeration areas, and for each primary sampling unit, 12 households were systematically selected. The primary sample frame was stratified first by region, followed by urban and rural areas within region, and then the urban/rural strata were further stratified implicitly by constituencies. The rural strata were also further stratified implicitly, taking into consideration the proclaimed villages, settlements, communal and commercial farming areas within the rural strata. As a result, a total of 864 primary sampling units were sampled in the survey (NSA, 2018). The households in the secondary sample frame were identified from the list of all households for each selected primary sampling unit, while additional information was collected from the primary sampling units in the proclaimed villages, settlements, communal and commercial farming areas for the purpose of carrying out further stratification before selecting the sample households. Overall, the survey had a representative sample size of 10,368 households from 864 sampled primary sampling units (NSA, 2018). More detailed information about the sampling design and methods, as well as the entire survey, can be found in the 2015/16 NHIES report, freely available online on the NSA website. The inclusion criteria for this study were all households headed by females as captured in the 2015/16 NHIES. Households with incomplete, non-response or missing information were excluded from this study. The individual households considered in this study were identified from the 2015/16 NHIES as per these inclusion criteria.
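The second-stage selection described above (12 households drawn systematically from each primary sampling unit's household list) can be sketched in a few lines of Python. This is an illustrative sketch only: the frame and function names are invented, and the actual NHIES field procedures differ in detail.

```python
import random

def systematic_sample(households, n=12, seed=0):
    """Systematically select n households from a PSU's listed households:
    a random start within the sampling interval, then every k-th unit.
    Illustrative sketch, not the NHIES production procedure."""
    k = len(households) / n                    # sampling interval
    start = random.Random(seed).uniform(0, k)  # random start in [0, k)
    return [households[int(start + i * k)] for i in range(n)]

# Example: a hypothetical PSU frame of 90 listed households.
frame = [f"HH{i:03d}" for i in range(90)]
sample = systematic_sample(frame)  # 12 distinct households
```

Systematic selection with a random start gives every listed household an equal inclusion probability while spreading the sample evenly across the frame.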
---
Descriptive analysis
The household characteristics of the female-headed households considered in this study were region, age (in years), main language spoken, main source of income, location, highest level of education and number of household members, as captured in the 2015/16 NHIES data. During the 2015/16 NHIES, each respondent was asked 'What is the main source of income for this household?' in order to determine the main source of income of their respective household. The obtained response was the household's own perception at the time of interview. Similarly, the annual consumption of a household interviewed in the 2015/16 NHIES was described using the total household consumption, average household consumption and consumption per capita indicators (all measured in Namibia Dollars (N$)). For this study, in order to determine the respective poverty level of each interviewed household, the household's average monthly per capita consumption (i.e., average consumption per capita divided by 12) was used. In Namibia, the food poverty line for 2015/2016 was estimated with a lower and upper bound of N$389.30 and N$520.80 per month, at a rate of US$1:N$13.23 (as of October 2018). Thus, using these poverty lines, each household considered in this study was classified as follows:
Poverty level = severely poor, if the average monthly per capita consumption is below N$389.30; poor, if it is at least N$389.30 but below N$520.80; and not-poor, if it is N$520.80 or above.
More detailed information about the construction of the main source of income, annual consumption and the remaining household characteristics considered in this study can be found in the 2015/16 NHIES report, freely available online on the NSA website.
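A minimal sketch of this classification rule in Python, using the two 2015/16 poverty lines; the example value is the median annual per capita consumption reported later in this study, and the function name is ours:

```python
# Classify a household by average monthly per capita consumption (N$),
# using the 2015/16 NHIES poverty lines: the lower bound marks severe
# poverty and the upper bound marks poverty.
SEVERE_LINE = 389.30   # lower-bound poverty line, N$ per month
POVERTY_LINE = 520.80  # upper-bound poverty line, N$ per month

def poverty_level(monthly_per_capita_consumption):
    if monthly_per_capita_consumption < SEVERE_LINE:
        return "severely poor"
    elif monthly_per_capita_consumption < POVERTY_LINE:
        return "poor"
    return "not-poor"

# The monthly figure is the annual per capita consumption divided by 12.
annual_per_capita = 52018.35              # median reported in this study
level = poverty_level(annual_per_capita / 12)  # → "not-poor"
```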
---
Statistical analysis
R software (version 4.2.2) was used for data cleaning, variable recoding and data analysis. Pearson's chi-square test was performed to examine the association between the household characteristics and poverty levels, while the effect of the household characteristics on their respective poverty levels was determined using a multivariable ordinal probit regression model, considering the ordered nature of the poverty levels (not-poor, poor and severely poor). An ordinal probit regression model estimates relationships between an ordinal dependent variable y and a set of independent variables x (Dopico, 2020) through a latent variable y_i* = x_i'β + ε_i, such that y_i = j if τ_(j-1) < y_i* ≤ τ_j, for j = 1, ..., J, where β is the vector of regression coefficients to be estimated, ε_i is the standard normal error term, the τ_j are the thresholds (with τ_0 = -∞ and τ_J = +∞) and J is the number of mutually exclusive categories of y (Johnston et al., 2020). In this study, y was the household's poverty level (not-poor, poor and severely poor), while x comprised the household characteristics (region, age, main language spoken, main source of income, location, highest level of education and number of household members). Characteristics significant in the chi-square tests (p<0.05) were included in the fitted multivariable ordinal probit regression model.
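The category probabilities implied by the ordinal probit model, P(y = j) = Φ(τ_j − x'β) − Φ(τ_(j−1) − x'β), can be sketched with the standard library alone. The threshold and linear-predictor values below are illustrative placeholders, not the fitted NHIES estimates.

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function (stdlib only).
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordinal_probit_probs(xb, thresholds):
    """P(y = j) = Phi(tau_j - x'b) - Phi(tau_{j-1} - x'b), with
    tau_0 = -inf and tau_J = +inf. `xb` is the linear predictor x'b;
    `thresholds` are the cut-points (illustrative values here)."""
    taus = [-math.inf] + list(thresholds) + [math.inf]
    return [norm_cdf(taus[j + 1] - xb) - norm_cdf(taus[j] - xb)
            for j in range(len(taus) - 1)]

# Three ordered categories (not-poor < poor < severely poor) require two
# thresholds; the probabilities across categories sum to 1.
probs = ordinal_probit_probs(xb=0.3, thresholds=[0.8, 1.5])
```

In practice such a model would be fitted by maximum likelihood (e.g., with R's `MASS::polr(method = "probit")` or statsmodels' `OrderedModel`); the sketch only shows how fitted coefficients and thresholds map to category probabilities.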
---
RESULTS
---
Participants' profiles
A total of 4451 female-headed households were considered in this study as per the inclusion criteria. As of 2015/16, these households had an estimated yearly per capita consumption of N$83022.76 and an average monthly per capita consumption of N$6918.56, with estimated median per capita consumption values of N$52018.35 and N$4334.86, respectively, as shown in Table 1. The highest numbers of female-headed households were recorded in the Omusati and Ohangwena regions, within rural areas, headed by a 60+ year old, among Oshiwambo speakers, with salaries/wages as their main source of income, with a primary education and living with 1-3 household members, as shown in Table 2. Of the 4451 female-headed households considered, 4432 (99.57%) were classified as not-poor, 11 (0.25%) were poor, while 8 (0.18%) were severely poor in 2015/16, as shown in Table 2. The majority of the female-headed households classified as not-poor were in rural areas, in the Omusati and Ohangwena regions, headed by a 60+ year old, with a primary education, spoke Oshiwambo as their main language, had salaries/wages as their main source of income and lived with 1-3 household members. Of the 11 female-headed households classified as poor, the highest numbers were observed in rural areas, in the Omaheke region, headed by a 60+ year old, with no formal education, spoke the Nama/Damara language, lived on a pension and lived with 1-6 household members. Likewise, of the 8 female-headed households classified as severely poor, the highest numbers were recorded in rural areas, in the Kunene region, headed by a 30-39 or 60+ year old, with no formal education, spoke the Otjiherero language, lived with 1-3 household members and had drought/in-kind receipts, pension, remittances/grants or subsistence farming as their main source of income.
---
Association examinations
From Table 2, at a 5% level of significance, household characteristics such as region (p<0.001), main language spoken at home (p<0.001), main source of income (p=0.009), location (p=0.016) and highest level of education (p=0.005) were significantly associated with the household poverty levels, whereas age (p=0.691) and number of household members (p=0.674) were not. All the characteristics with significant associations were included in the fitted multivariable ordinal probit regression model, with the results shown in Table 3.
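As an illustration of the association tests reported above, the Pearson chi-square statistic for a small contingency table can be computed as follows. The counts below are toy values, not the NHIES figures, and the p-value would come from comparing the statistic against a chi-square distribution with the stated degrees of freedom.

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c table of observed
    counts; expected counts assume independence of rows and columns."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand
            stat += (obs - exp) ** 2 / exp
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

# Toy example: location (urban/rural) by poverty level (3 categories).
stat, df = chi_square_stat([[2000, 3, 1], [2432, 8, 7]])
```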
---
Effects on poverty levels
Not-poor vs. severely poor
From Table 3(a), while holding other characteristics constant and with a significant p-value at a 5% level of significance, it can be concluded that the female-headed households in the //Karas (p<0.001), Kavango East (p<0.001), Kavango West (p<0.001), Khomas (p<0.001), Kunene (p<0.001), Ohangwena (p<0.001), Omaheke (p<0.001), Omusati (p<0.001), Oshana (p<0.001) and Oshikoto (p<0.001) regions were more likely to be severely poor and less likely to be not-poor, compared to those in the Erongo region. However, female-headed households in the Hardap (p<0.001), Otjozondjupa (p<0.001) and Zambezi (p<0.001) regions were less likely to be severely poor and more likely to be not-poor. Furthermore, female-headed households whose main language spoken was English (p<0.001), German (p<0.001), Zambezi (p<0.001) or another (p<0.001) language were less likely to be severely poor and more likely to be not-poor, compared to those whose main language was Afrikaans. Likewise, female-headed households whose main source of income was commercial farming (p<0.001) or other sources (p<0.001) were less likely to be severely poor and more likely to be not-poor, compared to those whose main source of income was business income. Moreover, female-headed households in urban areas (p=0.044) were less likely to be severely poor and more likely to be not-poor, compared to those in rural areas, while female-headed households whose highest level of education was secondary (p=0.050) or tertiary (p<0.001), or who did not state their level of education (p<0.001), were less likely to be severely poor and more likely to be not-poor, compared to those with no formal education.
---
Not-poor vs. poor
From Table 3(b), it can be concluded that the female-headed households in the Hardap (p<0.001), Kavango East (p<0.001), Omusati (p<0.001), Oshana (p<0.001), Oshikoto (p<0.001), Otjozondjupa (p<0.001) and Zambezi (p<0.001) regions were more likely to be poor and less likely to be not-poor, compared to those in the Erongo region, while those in the //Karas (p<0.001), Kavango West (p<0.001), Khomas (p<0.001), Kunene (p<0.001), Ohangwena (p<0.001) and Omaheke (p<0.001) regions were less likely to be poor and more likely to be not-poor. Furthermore, female-headed households whose main language spoken was German (p<0.001), Oshiwambo (p<0.001), Otjiherero (p<0.001), Rukavango (p<0.001), Zambezi (p<0.001) or another (p<0.001) language were more likely to be poor and less likely to be not-poor, compared to those whose main language was Afrikaans, while those whose main language was English (p<0.001), Khoisan (p<0.001), Nama/Damara (p<0.001) or Setswana (p<0.001) were less likely to be poor and more likely to be not-poor. Moreover, female-headed households whose main source of income was commercial farming (p<0.001), other sources (p<0.001), pensions (p<0.001), remittances/grants (p<0.001), salaries/wages (p<0.001) or subsistence farming (p<0.001) were more likely to be poor and less likely to be not-poor, compared to those whose income was business income, while those whose income was drought/in-kind receipts were less likely to be poor and more likely to be not-poor. Similarly, female-headed households in urban areas (p<0.001) were more likely to be poor and less likely to be not-poor, compared to those in rural areas, while female-headed households whose highest level of education was not stated (p<0.001), primary (p<0.001), secondary (p<0.001) or tertiary (p<0.001) were more likely to be poor and less likely to be not-poor, compared to those with no formal education.
---
Poor vs. severely poor
From Table 3(c), it can be concluded that the female-headed households in the Kavango West (p<0.001), Khomas (p<0.001), Ohangwena (p<0.001) and Otjozondjupa (p<0.001) regions were more likely to be severely poor and less likely to be poor, compared to those in the Erongo region, while those in the Hardap (p<0.001), //Karas (p<0.001), Kavango East (p<0.001), Kunene (p<0.001), Omusati (p<0.001), Oshana (p<0.001), Oshikoto (p<0.001) and Zambezi (p<0.001) regions were less likely to be severely poor and more likely to be poor. Furthermore, female-headed households whose main language spoken was English (p<0.001), German (p<0.001), Nama/Damara (p<0.001), Oshiwambo (p<0.001), Rukavango (p<0.001), Zambezi (p<0.001) or another (p<0.001) language were more likely to be severely poor and less likely to be poor, compared to those whose main language was Afrikaans, while those whose main language was Otjiherero (p<0.001) were less likely to be severely poor and more likely to be poor. Moreover, female-headed households whose main source of income was commercial farming (p<0.001) were more likely to be severely poor and less likely to be poor, compared to those whose income was business income, while those whose income was drought/in-kind receipts (p<0.001), other sources (p<0.001), pensions (p<0.001), remittances/grants (p<0.001), salaries/wages (p<0.001) or subsistence farming (p<0.001) were less likely to be severely poor and more likely to be poor. Likewise, female-headed households in urban areas (p<0.001) were less likely to be severely poor and more likely to be poor, compared to those in rural areas, while female-headed households whose highest level of education was not stated (p<0.001), primary (p<0.001), secondary (p<0.001) or tertiary (p<0.001) were more likely to be severely poor and less likely to be poor, compared to those with no formal education.
---
DISCUSSION
In this study, a multivariable ordinal probit regression model was used to examine the household characteristics that contribute to poverty among female-headed households in Namibia, as well as their effects on the households' poverty levels. The majority of the female-headed households in Namibia during 2015/16 were recorded in the Omusati and Ohangwena regions, within rural areas, headed by a 60+ year old, spoke Oshiwambo, had salaries/wages as their main source of income, and had a primary education.
Furthermore, household characteristics such as region, main language spoken at home, main source of income, location, and highest level of education had a significant association with the household poverty levels, while age and number of household members did not. These findings are similar to those of Biyase & Zwayne (2018), who concluded that level of education, region and location (urban/rural) were some of the main characteristics associated with poverty levels. However, the finding on household members contradicts Lekobane & Seleka (2017), who concluded that household size was related to the likelihood of falling into poverty, since more resources are required to meet the basic needs of larger households.
Moreover, female-headed households in urban areas in the Hardap, Otjozondjupa and Zambezi regions, whose main language spoken was English, German, Zambezi or another language, with tertiary education and with commercial farming or other sources as their main source of income, were less likely to be severely poor and more likely to be not-poor. However, those in the //Karas, Kavango East, Kavango West, Khomas, Kunene, Ohangwena, Omaheke, Omusati, Oshana and Oshikoto regions were more likely to be severely poor and less likely to be not-poor. These findings are not surprising, as potential employers in government institutions and privately owned companies most often require new recruits to be proficient in widely used languages such as English, German and other African and European languages, while also requiring a higher class of qualification from them (Oyedele, 2022). Also, female-headed households in the //Karas, Kavango East, Kavango West, Khomas, Kunene, Ohangwena, Omaheke, Omusati, Oshana and Oshikoto regions still experience comparatively high inequality as well as less financial inclusion. Most often, women in these regions tend to engage in lower-paying jobs such as domestic work, sales and service work, while their male counterparts tend to take up jobs that require more skills and attract higher pay, such as transportation, mining and construction work. Likewise, the majority of women in rural areas are left with no option but to engage in lower-paying work such as agriculture and farm work (tending to livestock, ploughing, etc.), domestic work (cooking, cleaning, washing, etc.) and caregiving (for children or elderly persons). These findings are similar to those of Mwangi (2017), who concluded that female-headed households face gender discrimination with respect to earnings, rights and economic opportunities.
Comparing the poor to the non-poor, female-headed households in urban areas in the Hardap, Kavango East, Omusati, Oshana, Oshikoto, Otjozondjupa and Zambezi regions, whose main language spoken was German, Oshiwambo, Otjiherero, Rukavango, Zambezi or another language, whose main source of income was commercial farming, other sources, pensions, remittances/grants, salaries/wages or subsistence farming, and whose highest level of education was primary, secondary or tertiary, were more likely to be poor and less likely to be not-poor. On the other hand, households in the //Karas, Kavango West, Khomas, Kunene, Ohangwena and Omaheke regions, whose main language was English, Khoisan, Nama/Damara or Setswana, with drought/in-kind receipts as their main source of income, were less likely to be poor and more likely to be not-poor. This can be due to female-headed households in the northern regions struggling to find decent jobs where their main source of income is commercial farming. Also, female-headed households may not have collateral to secure loans from financial institutions, or may not own means of production such as land. Thus, most engage in income-generating activities such as a blend of small businesses (selling vegetables or second-hand clothes in open markets and informal settlements), domestic work, and low-income casual jobs. These findings are similar to those of Lebni et al. (2020), who concluded that women have to work in marginal, part-time, informal, and low-income jobs due to, among other factors, a lack of access to high-paying jobs. In addition, free primary and secondary education is said to produce a more literate society, which in turn can lower the likelihood of individuals living in severe poverty. However, women most often do not receive high-paying jobs even when they are highly educated, compared to their male counterparts.
This further shows how inequality and power imbalances pose a great barrier to female-headed households in Namibia, as they serve to justify and maintain socioeconomic inequalities that disproportionately affect women. This is similar to Mwangi (2017), who concluded that female-headed households are linked to gender inequality issues, as women are more vulnerable to poverty than men. Individuals from female-headed households with drought/in-kind receipts as their main source of income most often work on farms that focus on crop farming and occasionally receive donations from government, privately owned organizations and generous individuals, thus having a better chance of not being poor. This in turn marginally improves their household poverty levels, although not immensely.
Comparing the severely poor to the poor, female-headed households in the Kavango West, Khomas, Ohangwena and Otjozondjupa regions, whose main language spoken was English, German, Nama/Damara, Oshiwambo, Rukavango, Zambezi or another language, with commercial farming as their main source of income, were more likely to be severely poor and less likely to be poor. However, households in the Hardap, //Karas, Kavango East, Kunene, Omusati, Oshana, Oshikoto and Zambezi regions in urban areas, whose main language was Otjiherero, with drought/in-kind receipts, other sources, pensions, remittances/grants, salaries/wages or subsistence farming as their main source of income, were less likely to be severely poor and more likely to be poor. These findings are not surprising, as female-headed households reliant on commercial farming can carry high debt from the hiring costs of agricultural machinery and the marketing and distribution of produce. In addition, female-headed households in regions that mainly speak Otjiherero depend on agriculture for their livelihood, while lacking basic necessities such as health care, access to credit facilities and land ownership. These findings are similar to those of Mwangi (2017) and Borchelt (2022): Mwangi (2017) concluded that women lack access to economic empowerment avenues such as credit facilities for business or agricultural expansion, and lack access to knowledge and technologies in these industries, while Borchelt (2022) concluded that a woman's health affects her household's economy, where her inability to work due to hospitalization or chronic illness could reduce her income, thus increasing the likelihood of falling into poverty.
---
CONCLUSION
With household characteristics such as region, main language spoken, location, highest level of education and main source of income having a significant impact on female-headed households' poverty levels, it is recommended that the Namibian government and policy makers put more effort into improving the livelihoods of women, especially those heading households in the //Karas, Kavango East, Kavango West, Khomas, Kunene, Ohangwena, Omaheke, Omusati, Oshana and Oshikoto regions, through a comprehensive social development strategy that covers both the immediate, short-term needs and the long-term needs of these women. This can be achieved through: (i) government ministries' and relevant poverty eradication organizations' continuous strengthening of the national poverty eradication measures put in place in the country; (ii) introducing programs targeted at benefiting women so that they can escape (moderate to severe) poverty and not be subjected to it; and (iii) incorporating social services and programs focused on building the capacity of women through education, life skills and business training to eradicate poverty, especially in the Otjiherero-, Rukavango- and Zambezi-speaking female-headed households in the //Karas, Kavango East, Kavango West, Khomas, Kunene, Ohangwena, Omaheke, Omusati, Oshana and Oshikoto regions. Further studies on this topic are also recommended with: (i) a multidimensional household poverty definition using data from the next NHIES, pending availability of funds from the sponsors, incorporating a multidimensional poverty concept and considering more relevant variables such as place of work, duration of employment, COVID-19 effects and household indebtedness, among others; and (ii) a longitudinal study that examines the same households to detect any changes that might occur over a longer, specified period of time.
---
LIMITATIONS
The 2015/16 NHIES key poverty indicators preliminary report contains no sex-disaggregated data on poverty, which meant that the most recent poverty profile by sex came from the 2009/10 NHIES. Also, being a household-based survey, people who were homeless, and those who usually resided in private households but were in hospital, prison or school hostels during the data collection period of the 2015/16 NHIES, were excluded, as were those in institutions such as correctional institutions/police cells, old age homes, army and police barracks/camps/ships in harbour, child care institutions/orphanages, hospitals, hotels and churches/convents/monasteries/religious retreats. Furthermore, there is a possibility that interviewed respondents did not report their true annual household consumption during the survey, seeing as personal income and expenditure are two of the most sensitive pieces of information to share with non-household members. Moreover, although the 2015/16 NHIES defined any person who is not able to spend at least N$389.30 on essential needs as severely poor, and a person who is not able to spend at least N$520.80 as poor, these definitions do not necessarily reflect today's economic reality, especially given the high cost of living as well as the devastating effect of COVID-19 on the economy and people's livelihoods. Likewise, even though the latest nationwide representative data in Namibia was used for this study, the time elapsed since 2015/16 is acknowledged and might have brought about significant changes. Thus, findings about geographical differences may have changed, and interpretations must be made with caution.
---
CONFLICT OF INTEREST
The authors have no competing interests. Ethical approval was not sought for this study since the 2015/16 NHIES data used is freely available in the public domain and downloadable from the NSA website. Additionally, this study followed all ethical standards for research without direct contact with human or animal subjects, as no names of persons or household addresses were recorded in the NHIES data.
---
AUTHOR CONTRIBUTIONS
OO and RH researched literature and conceived the study. RH collected the data, while OO conducted the data analysis and prepared the first draft of the manuscript. All authors reviewed and edited the manuscript and approved the final version for publication.
Agent-based models simulating social reality generate outputs which result from a complex interplay of processes related to agents' rules of interaction and the model's parameters. As agent-based models become more descriptive and driven by evidence, they become a useful tool for simulating and understanding social reality; however, the number of parameters and agents' rules of interaction grows rapidly. Such models often have unvalidated parameters that must be introduced by the modeler for the model to be fully functional. These unvalidated parameters are often informed by the modeler's intuition only and may represent gaps in existing knowledge about the underlying case study. Hence, a rather long list of model parameters is not a limitation but an inherent feature of descriptive, evidence-driven models that simulate social complexity. Theoretical exploration of a model's behavior with respect to its parameters, in particular those that are not constrained by validation, is important but has been, until recently, limited by the lack of available computational resources and analysis tools to explore the vast parameter space. An agent-based model of moderate complexity will, when run across different parameter configurations (i.e., the total number of configurations times the number of simulation runs), generate output data that could easily be on a scale of gigabytes or more. With high performance computing (HPC), it has become possible for agent-based modelers to explore their models' (vast) parameter space, and while generating this simulated 'big data' is becoming (computationally) cheaper, analyzing an agent-based model's outputs over a (relatively) large parameter space remains a big challenge for researchers.
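The multiplicative growth of output described above can be illustrated with a toy full-factorial sweep. Everything here is a hypothetical sketch: `run_model` is a stand-in for a single simulation run (not the DITCH model), and the parameter names are invented for the example.

```python
import itertools
import random

def run_model(params, seed):
    """Stand-in for one run of an agent-based model; returns a scalar
    output. A real model would be far more involved."""
    rng = random.Random(seed)
    return params["influence"] * params["density"] + rng.gauss(0, 0.01)

# Full-factorial design: every combination of parameter values, with
# several replicate runs each -- the output volume grows as the product
# of all value counts times the number of replicates.
grid = {"influence": [0.1, 0.5, 0.9], "density": [0.2, 0.6]}
replicates = 5
results = []
for combo in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    for seed in range(replicates):
        results.append({**params, "output": run_model(params, seed)})
# 3 x 2 configurations x 5 replicates = 30 simulated records.
```

With dozens of parameters and more values per parameter, the same loop structure quickly yields the gigabyte-scale output volumes discussed in the text.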
In this paper we present a selection of practical exploratory and data mining techniques that might be useful for understanding outputs generated from agent-based models. We propose a simple schema and demonstrate its application on an evidence-driven agent-based model of inter-ethnic partnerships (dating and marriages), called 'DITCH'. The model is available on OpenABM and reported by Meyer et al. (2014). In the analysis reported in this paper, we focus on the dynamics and interplay of the key model parameters and their effect on the model's output(s). We do not consider the model's validation in terms of the case studies on which it is based.
The next section ("Analyzing agent-based models: a brief survey") reviews selected papers that have previously addressed the issue of analyzing agent-based models. The "A proposed schema combining exploratory, sensitivity analysis and data mining techniques" section presents a general schema for analyzing outputs generated by agent-based models and gives an overview of the exploratory and data mining techniques used in this paper. In the "Illustration: implementing the proposed schema on the 'DITCH' agent-based model" section, we present an overview of the DITCH agent-based model and discuss its parameters with the default values reported by Meyer et al. (2014). This section also describes the experimental setup and results. Finally, the "Conclusions and outlook" section concludes with next steps in this direction.
---
Analyzing agent-based models: a brief survey
Agent-based models tend to generate large volumes of simulated data that are dynamic and high-dimensional, making them (sometimes extremely) difficult to analyze. Various exploratory data analysis (EDA) and data mining (DM) techniques have been reported for exploring and understanding a model's outcomes against different input configurations (e.g., Villa-Vialaneix et al. 2014). These techniques include heat maps, box and whisker plots, sensitivity analysis, classification trees, the K-means clustering algorithm, and ranking of model parameters by their influence on the model's outcomes.
Several papers have proposed and explored data mining techniques to analyze agent-based simulations. One such paper is by Remondino and Correndo (2006), where the authors applied 'parameter tuning by repeated execution', i.e., a technique in which multiple runs are performed for different parameter values at discrete intervals to find the parameters that turn out to be most influential. The authors suggested different data mining techniques such as regression, cluster analysis, analysis of variance (ANOVA), and association rules for this purpose. For illustration, Remondino and Correndo (2006) presented a case study in which a biological phenomenon involving some species of cicadas was analyzed by performing multiple simulation runs and aggregating the results. In another work, Arroyo et al. (2010) proposed a methodological approach involving a data mining step to validate and improve the results of an agent-based model. They presented a case study in which cluster analysis was applied to validate simulation results of the 'MENTAT' model, with the aim of studying the factors influencing the evolution of Spanish society from 1998 to 2000. The clustering results were found to be consistent with the survey data that had been used to construct the model. Edmonds et al. (2014) used clustering and classification techniques to explore the parameter space of a voter behavior model, with the goal of understanding the social factors influencing voter turnout. The authors used machine learning algorithms such as K-means clustering, hierarchical clustering, and decision trees to evaluate data generated from the simulations. More recently, Broeke et al. (2016) used sensitivity analysis to study the behavior of agent-based models. The authors applied OFAT ('One Factor at a Time'), global, and regression-based sensitivity analysis to an agent-based model in which agents harvest a diffusing renewable resource. Each of these methods was used to evaluate robustness and outcome uncertainty, and to understand the emergence of patterns in the model.
The above cited references are by no means exhaustive but provide some interesting examples of the use of data mining techniques in analyzing agent-based models. In the next section, we give an overview of some of the EDA and sensitivity analysis (SA) techniques used in this paper. "Illustration: implementing the proposed schema on the 'DITCH' agent-based model" section of this paper further discusses the EDA, SA and DM techniques vis-à-vis the analysis of simulated outputs of an agent-based model.
---
A proposed schema combining exploratory, sensitivity analysis and data mining techniques
We propose a schematic approach as a step towards combining the different analysis techniques typically used in the analysis of agent-based models. We present a methodological approach that uses exploratory, statistical and data mining techniques for analyzing the relationships between input and output parameters of an agent-based model. Applying the appropriate technique (or set of techniques) to analyze a model's behavior and parameter sensitivity is key to validating an agent-based model and predicting real-world phenomena with it. In "Illustration: implementing the proposed schema on the 'DITCH' agent-based model" section, we demonstrate the application of various exploratory data analysis, sensitivity analysis, and data mining techniques to understand the impact of various input parameters on the model output.
Figure 1 shows a schema that combines exploratory, statistical and data mining techniques to analyze outputs of agent-based models. We first begin with a broad, exploratory analysis of a selected model's input variables (parameters) to understand their effect on the model's outputs. This is a typical way of understanding agent-based models, where a wide range of parameters is explored to inspect their relationship with the model outputs visually. Model sensitivity analysis follows next. With many input parameters, understanding outputs through eyeballing alone is difficult; techniques such as the partial rank correlation coefficient (PRCC) help to measure 'monotonic relationships between model parameters and outputs' (Marino et al. 2008). Data mining techniques then allow one to find patterns in the generated output across a wide range of the model's input parameters.
Next, we present an overview of some of the techniques that may be applied for each step in the schema, as shown in Fig. 1.
---
Exploratory data analysis
Data analysis in exploratory data analysis (EDA) is typically visual. EDA techniques help to highlight important characteristics of a given dataset (Tukey 1977). Choosing EDA as the starting point of our proposed schema provides a simple yet effective way to analyze the relationship between a model's input and output parameters. Graphical EDA techniques such as box and whisker plots, scatter plots, and heat maps (Seltman 2012) are often used to report the data generated by an (agent-based) simulation. Heat maps are often good visual indicators of patterns in the simulated output as parameter values change, whereas scatter plots are often good indicators of the association between two independent variables (model parameters) for a particular dependent variable (model output). Box and whisker plots, on the other hand, summarize a data distribution by showing the median, the inter-quartile range, skewness, and the presence of outliers (if any) in the data. Other techniques such as histograms and violin plots describe the full distribution of an output variable for given input parameter configuration(s) and are more descriptive than box and whisker plots (Lee et al. 2015).
In this paper, we used the ggplot2 package in R to generate heat maps and box and whisker plots of the output variables against the varied parameters. The results shown in "Illustration: implementing the proposed schema on the 'DITCH' agent-based model" section highlight the tipping points in the heat maps where the percentage of the dependent variable changes significantly. In order to explore the variation in output across the varying parameters, box plots were produced for different parameter configurations. The box plots can thus be used to identify the subsets of the dataset that contribute most to increasing the proportion of a target variable.

Fig. 1 A schema for analyzing outputs generated by agent-based (social simulation) models using a combination of exploratory, statistical and data mining techniques
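To make the box-and-whisker summarization concrete, the sketch below computes the five-number summary and outlier fence that such a plot encodes. This is an illustrative pure-Python example, not the authors' R/ggplot2 code, and the run values are invented rather than taken from the DITCH experiments.

```python
# Minimal sketch: the five-number summary behind a box-and-whisker plot
# for one parameter configuration. Data values are illustrative.

def five_number_summary(values):
    """Return (min, Q1, median, Q3, max) using linear interpolation."""
    xs = sorted(values)
    n = len(xs)

    def quantile(q):
        pos = q * (n - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        return xs[lo] + frac * (xs[hi] - xs[lo])

    return (xs[0], quantile(0.25), quantile(0.5), quantile(0.75), xs[-1])

# e.g. ten replications of the cross-ethnic marriage percentage (invented)
runs = [4.1, 3.8, 5.2, 4.7, 4.4, 6.0, 3.9, 4.9, 5.5, 4.2]
mn, q1, med, q3, mx = five_number_summary(runs)
iqr = q3 - q1
whisker_hi = q3 + 1.5 * iqr   # points beyond this fence are drawn as outliers
```

A box plot is exactly this summary drawn per configuration, which is why it compactly exposes the spread and outliers across replications.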
---
Sensitivity analysis
The purpose of performing sensitivity analysis is to study the sensitivity of the output variables to our ABM's input parameters, and thus to provide more focused insight than exploratory analysis techniques. Several techniques may be used to perform sensitivity analysis. For the results reported in "Illustration: implementing the proposed schema on the 'DITCH' agent-based model" section, we applied multiple sensitivity analysis techniques: the variable importance method, the recursive feature elimination method, and PRCC (partial rank correlation coefficient).
Following step 2 of the proposed schema (Fig. 1), we identify two useful methods that are used in the analysis in "Illustration: implementing the proposed schema on the 'DITCH' agent-based model" section: variable importance and recursive feature elimination.
---
Variable importance
For a given output variable, the ranking of each input variable (model parameter) with respect to its importance can be estimated using model information (a training data set). Variable importance thus quantifies the contribution of each input variable (parameter) to a given output variable. The method assumes a linear model, whereby the absolute value of each fitted coefficient is used as the importance estimate of the corresponding input variable. In our case, we used the caret package in R, which fits a linear model of a dependent attribute against the input attributes and then ranks the inputs with respect to their estimated importance.
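As a rough illustration of the idea (not caret's actual algorithm), the following pure-Python sketch ranks input parameters by the absolute Pearson correlation with the output. The parameter names echo the DITCH model, but the sweep data are invented and chosen only so that one parameter clearly dominates.

```python
# Hedged sketch: rank inputs by |Pearson correlation| with the output,
# a simple stand-in for a fitted linear model's coefficient-based ranking.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def rank_importance(inputs, output):
    scores = {name: abs(pearson(col, output)) for name, col in inputs.items()}
    return sorted(scores, key=scores.get, reverse=True)

# toy sweep (invented): 'love-radar' strongly drives the output
sweep = {
    "love-radar":      [1, 1, 2, 2, 3, 3],
    "new-link-chance": [0.1, 0.2, 0.1, 0.2, 0.1, 0.2],
    "mean-dating":     [1.5, 1.8, 1.4, 1.9, 1.6, 1.7],
}
crossethnic = [2.0, 2.9, 5.1, 6.0, 7.5, 8.4]
ranking = rank_importance(sweep, crossethnic)
```

On this toy sweep the ranking places love-radar first, mirroring the kind of ordering the paper reports for the real model.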
---
Recursive feature elimination
The recursive feature elimination (RFE) method builds many models based on different subsets of attributes, again using the caret package in R. This part of the analysis is carried out to explore all possible subsets of the attributes and to predict the accuracy achievable with attribute subsets of different sizes.
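The backward-elimination loop at the heart of RFE can be sketched as follows. This is a schematic pure-Python illustration, not caret's implementation; the `score` function and the importance values are placeholders standing in for a fitted model's rankings.

```python
# Illustrative skeleton of recursive feature elimination (RFE):
# repeatedly score features, drop the weakest, and record each subset.

def rfe(features, score):
    """Return subsets from the full set down to one feature,
    removing the weakest-scoring feature at each step."""
    remaining = list(features)
    subsets = [list(remaining)]
    while len(remaining) > 1:
        weakest = min(remaining, key=score)   # least important feature
        remaining.remove(weakest)
        subsets.append(list(remaining))
    return subsets

# toy importances standing in for a fitted model's rankings (assumed values)
importance = {"love-radar": 0.9, "new-link-chance": 0.6,
              "sd-education-pref": 0.3, "mean-dating": 0.1}
subsets = rfe(importance, score=importance.get)
```

In a real RFE run, each recorded subset would be refitted and its predictive accuracy compared, which is what Fig. 4 (right) summarizes via RMSE.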
---
Using data mining to analyze ABM outputs
There is a growing interest in the social simulation community in applying data mining techniques to analyze the multidimensional outputs that are generated by agent-based simulations across a vast parameter space. In this section, we present an overview of some of the common data mining techniques that have been used to analyze agent-based models' outputs.
---
Classification and regression trees
A classification/regression tree is based on a supervised learning algorithm which provides a visual representation of the classification or regression of a dataset (Russell and Norvig 2009). It provides an effective way to generalize and predict output variables for a given dataset. In such trees, nodes represent the input attributes, and edges represent their values. One way to construct such a decision tree is to use a divide-and-conquer approach: a sequence of tests is performed on each attribute node, splitting the node on each of its possible values. The process is repeated recursively, each time selecting a different attribute node to split on, until there are no more nodes left to split and a single output value is obtained.
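For a regression tree, the usual split criterion is variance reduction, which the sketch below makes concrete for one numeric attribute. This is an illustrative pure-Python example of choosing the best binary split, not Weka's REPTree code, and the data are invented.

```python
# Sketch of the variance-reduction split criterion used by regression-tree
# learners: pick the threshold that most reduces output variance.

def variance(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split(xs, ys):
    """Return (threshold, variance_reduction) for the best binary split."""
    base = variance(ys)
    best = (None, 0.0)
    for t in sorted(set(xs))[:-1]:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        n = len(ys)
        child = (len(left) * variance(left) + len(right) * variance(right)) / n
        if base - child > best[1]:
            best = (t, base - child)
    return best

# love-radar values vs. cross-ethnic % (invented, for illustration)
radar = [1, 1, 1, 2, 2, 2, 3, 3, 3]
cross = [2.0, 2.2, 1.8, 7.0, 7.4, 6.6, 8.1, 7.9, 8.0]
threshold, gain = best_split(radar, cross)
```

A tree builder applies this selection recursively to each resulting subset, which is how the structure in Fig. 6 arises.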
---
K-Means clustering
K-means clustering is one of the most widely implemented clustering algorithms and has been used to analyze agent-based models, e.g., by Edmonds et al. (2014). It is often used in situations where the input variables are quantitative, with squared Euclidean distance as the dissimilarity measure for finding clusters in a given dataset (Friedman et al. 2009). The accuracy of the K-means algorithm depends upon the number of clusters specified at initialization; depending upon the choice of the initial centers, the clustering results can vary significantly.
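A minimal sketch of the K-means (Lloyd's) iteration with squared Euclidean distance follows, for illustration only; real analyses would use R's cluster package or similar, and the points and initial centers here are invented.

```python
# Minimal Lloyd's algorithm: alternate assignment (nearest center by
# squared Euclidean distance) and update (center = group mean).

def kmeans(points, centers, iters=20):
    for _ in range(iters):
        # assignment step
        groups = [[] for _ in centers]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        # update step: move each center to its group's mean
        centers = [
            tuple(sum(coords) / len(g) for coords in zip(*g)) if g else c
            for g, c in zip(groups, centers)
        ]
    return centers, groups

pts = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1),    # low-output runs
       (0.9, 1.0), (1.0, 0.9), (1.1, 1.1)]    # high-output runs
centers, groups = kmeans(pts, centers=[(0.0, 0.0), (1.0, 1.0)])
```

The dependence on initial centers noted above is visible here: different starting centers can converge to different partitions, which is why multiple random restarts are standard practice.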
---
Illustration: implementing the proposed schema on the 'DITCH' agent-based model
In this section, we present an overview of the 'DITCH' agent-based model followed by a description of the experimental setup through which the data was generated. We then report analysis of the generated output using the techniques introduced in the previous section.
---
An overview of the DITCH agent-based model (Meyer et al. 2014)
We have used the DITCH ("Diversity and Inter-ethnic marriage: Trust, Culture and Homophily") agent-based model by Meyer et al. (2014, 2016) for our analysis. Written in NetLogo,2 the model is evidence-driven and simulates inter-ethnic partnerships leading to cross-ethnic marriages as reported in different cities of the UK.
Agents in the DITCH model are characterized by traits that influence their preferences for choosing suitable partner(s) over the course of a simulation run. The model assumes heterosexual partnerships/marriages within and across different ethnicities.
Agents' traits in the DITCH model (source: Meyer et al. 2016):
• Gender {Male, Female}: Agents choose partners of opposite gender.
• Age {18-35}: Preference based on a range with (default) mean 1.3 years and (default) standard deviation of 6.34 years.
• Ethnicity (w, x, y, z): Agents have a preference for selecting partners of their own ethnicity or a different ethnicity.
• Compatibility (score: 0-1): Agents prefer partners with a compatibility score that is closer to their own.
• Education (levels: 0-4): Agents are assigned different levels of education, which influences their partner selection.
Environment: Agents in the DITCH model are situated in a social space where they interact with each other and use their pre-existing social connections to search for potential partners. The choice of a potential partner depends upon an agent's aforementioned traits as well as other model parameters, which we discuss later on. Once a partnership is formed, agents date each other to determine whether the partner's education and ethnicity satisfy their requirements. They continue dating for a specified period, after which they reveal their compatibility scores to each other; if the scores are within their preferred ranges, they become marriage partners. Once a marriage link is formed, agents remain in the network without searching for any more potential partners. There is no divorce or break-up of marriages in the model. The model runs on a monthly scale, i.e., a time step/tick corresponds to 1 month in the model.
DITCH model parameters: The following model parameters set up the initial conditions at the start of a simulation run.
• ethproportion: Proportions of different ethnicities in the agent population.
• num-agents: Total number of agents. The population remains constant during simulation.
• love-radar (values: 1, 2, 3): Defines the range, as a 'social distance', within which an agent searches its network for a potential partner.
• new-link-chance: Probability that two unconnected agents will form a new link during a simulation run.
• mean-dating/sd-dating: Mean and standard deviation of an agent's dating period (in years).
• sd-education-pref: An agent's tolerance for the difference in education level vis-à-vis its romantic partner.
---
Experimental setup
Initialization of ethnic proportions: The DITCH model uses the UK census data of 2001 as a basis for the parameter ethproportion. In all of the simulation experiments reported in this paper, the following four cases were used, based on four UK cities differentiated with respect to the proportion of different ethnicities (Meyer et al. 2016; see Table 1).

We conducted experiments using the BehaviorSpace tool in NetLogo, which allows a model's parameter space to be explored. The approach we used is also called "parameter tuning by repeated execution", i.e., varying one input parameter at a time while keeping the remaining parameters (update-threshold, second-chance-interval) unchanged (Remondino and Correndo 2006).
The DITCH model generates several outputs; a complete description is given by its developers in Meyer et al. (2014, 2016). In the analyses reported in this paper, we focus on one output variable as the primary output: crossethnic, the percentage of cross-ethnic marriages in the system. Values of this variable were taken at the end of each simulation run (120 time steps; 10 years) and averaged over 10 replications per parameter configuration.
Given our resource constraints, we performed the experiments in two phases: In the first phase, we looked into the model's sensitivity to scale (in terms of the number of agents) and the extent to which agents search their potential partners in the network (i.e., love-radar). In the second phase, we explored the model's parameters specific to expanding agents' social network and those related to agents' compatibility with their potential partners.
Phase-I: We first explored the model by varying two parameters, with 10 repetitions, for a total of 600 runs. All other parameters remained unchanged. Each simulation ran for 120 ticks (10 years).
Phase-II: In the second phase, we kept the number of agents fixed at 3000 (see "Conclusions and outlook" section for a discussion of this choice). We then varied the other five model parameters for the four UK cities' ethnic configurations (see Table 1), for a total of 9720 runs. Each simulation ran for 120 ticks (10 years).
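The factorial sweep-with-replications setup can be sketched as below. The grid values are illustrative stand-ins (they do not reproduce the 9720-run design), and the code only enumerates configurations rather than invoking NetLogo.

```python
# Sketch of enumerating a full-factorial sweep with replications,
# analogous to a BehaviorSpace experiment. Values are illustrative.
import itertools

grid = {
    "love-radar": [1, 2, 3],
    "new-link-chance": [0.05, 0.10, 0.15],
    "city": ["Newham", "Dover", "Bradford", "Birmingham"],
}
replications = 10

configs = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]
runs = [(cfg, rep) for cfg in configs for rep in range(replications)]
# total runs = number of configurations x replications
```

The total run count grows multiplicatively with each added parameter level, which is why analyzing the resulting output quickly becomes the bottleneck.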
---
Simulation results and analyses
Here we present the results of the simulation experiments. For box plots and heat maps, we used R4 and its ggplot2 package. For regression/parameters importance analyses and for cluster analyses, we used R's caret and cluster packages respectively. For classification trees, we used Weka3 software.5
---
Results from simulation experiments (Phase-I)
In Phase-I, we varied the number of agents and the three values of the model parameter love-radar. For the remaining parameters, the default values reported in Meyer et al. (2016) were used. The purpose of the Phase-I experiments was to gain a broad sense of the model's outcomes, in particular the outcome of interest, the percentage of cross-ethnic marriages occurring over a course of 10 years. Primarily, we were interested in testing the model's sensitivity to scale (the number of agents) and to the availability of potential partners as the social distance (the love-radar parameter) increases (Table 2).
To summarize the results, we generated the box and whisker plots and heat maps (Janert 2010; Seltman 2012; Tukey 1977), to explore variation in output across the two varying parameters and within each parameter configuration when repeated 10 times.
Figure 2 clearly indicates that the average percentage of cross-ethnic marriages across all four cases (UK cities) is sensitive to the number of agents in the system. In particular, there is a sharp decrease in the average percentage of cross-ethnic marriages when the number of agents increases from 1000 to 2500. This is most evident in the case of Newham, where ethnic diversity was greatest, in contrast to the case of Dover, where 98% of the agent population belonged to the White ethnic group. While sensitivity to scale is observed, the decline becomes much slower and levels off as the number of agents approaches 10,000. For a fixed agent population size, the love-radar parameter does influence the percentage of cross-ethnic marriages in all four cases (UK cities). This is unsurprising, as increasing the value of this parameter gives agents a wider search space for potential partners, and thus the possibility of finding a potential partner from a different ethnic group increases as well. However, the relationship between the value of love-radar and the output variable crossethnic is nonlinear in all four cases (see Fig. 3). In Newham, which has the greatest ethnic diversity among the four cities considered, the percentage of cross-ethnic marriages increases as the allowable social distance (the value of the love-radar parameter) increases, whereas in the cases of Bradford and Dover, an increase in love-radar from 1 to 2 results in an increase in average cross-ethnic marriages, but a further increase from 2 to 3 results in a decrease. The heat map shown in Figure S1 in Additional file 1: Appendix further highlights this effect.
From an exploratory analysis of the Phase-I simulations, it is clear that the DITCH model is sensitive to the number of agents in the system. As the effect dampens when the agent population grows further, we fixed the number of agents at 3000 for the simulation experiments in Phase-II. In the case of love-radar, the observed nonlinear relationship indicates that other model parameters that were kept fixed in Phase-I also contribute to the output. A further exploration and deeper analysis of these model parameters is therefore presented next.
---
Results from simulation experiments (Phase-II)
In Phase-II, we fixed the agent population at 3000 and ran simulations across different values of the five other model parameters, as described in the previous section. Here we demonstrate the use of several predictive and data mining techniques that might be useful in exploring and analyzing outputs generated from agent-based models.
First, we estimate the 'importance' of the parameters by building a predictive model from the simulated data (Brownlee 2016). For instance, the importance of parameters can be estimated (subject to the underlying assumptions) using a linear regression model. We used the caret package in R for this purpose. The method ranks attributes by importance with respect to a dependent variable, here crossethnic (the percentage of cross-ethnic marriages), as shown in Fig. 4 (left). As Fig. 4 (left) shows, the model parameters love-radar and new-link-chance were identified as the most important, while the parameter mean-dating was ranked last. Figure 4 (right) shows the RMSE (root mean square error) when assessing the predictive model's accuracy in the presence and absence of model parameters through the automated feature selection method. Again, love-radar and new-link-chance were found to be the most significant (the top two independent variables). Having identified love-radar and new-link-chance as the two most important parameters, we explore the variation in the generated dataset for the four cases (UK cities) with respect to these two parameters, as shown in the box plots in Fig. 5.
As Fig. 5 shows, increasing the value of the love-radar parameter does increase the average percentage of cross-ethnic marriages in the DITCH model. Increasing the chance of new-link formation also contributes, albeit less significantly. The variations observed in the box and whisker plots also suggest a role for the other three parameters, which appear to matter once the values of love-radar and new-link-chance are increased (see the heat map in Figure S2 in Additional file 1: Appendix).
Evaluating partial rank correlation coefficients: We further explored a subspace of the parameter space to identify the most relevant parameters by evaluating partial rank correlation coefficients (PRCC) for all output variables (Blower and Dowlatabadi 1994). The rationale behind calculating the PRCC is that, for a particular output, not all input parameters may contribute equally. Thus, to identify the most relevant parameter(s), PRCC can be useful. One major advantage of identifying the top-most relevant parameters based on the PRCC is that, given a large parameter space, if only a few input parameters contribute significantly to a particular output, the dimensionality of the parameter space is reduced significantly. For our analysis, we calculated the PRCCs for all output variables using an R package called knitr.6 Table 3 shows the top three contributing inputs for each output variable when the PRCC was estimated.

Fig. 4 Ranking of the five parameters as predictors of the output variable crossethnic. Left: the importance ranking of model parameters. Right: RMSE score against different models built using the automatic feature selection algorithm
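PRCC rank-transforms inputs and outputs before correlating; the sketch below shows that rank-correlation core (Spearman's rho) in pure Python, omitting the 'partial' step of regressing away the other parameters. The data are invented and chosen to show that a nonlinear but monotonic relationship still yields a coefficient of 1.

```python
# Rank-correlation core behind PRCC: Spearman's rho is Pearson's
# correlation applied to the ranks of the data (no ties assumed here).

def ranks(values):
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0.0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = float(rank)
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mean = (n + 1) / 2
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)
    return cov / var

# a monotonic but strongly nonlinear relationship (invented values)
radar = [1, 2, 3, 4, 5]
cross = [0.5, 0.6, 2.0, 7.5, 30.0]
rho = spearman(radar, cross)
```

This robustness to nonlinearity is precisely why rank-based measures such as PRCC suit agent-based model outputs, whose responses to parameters are rarely linear.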
Following our proposed schema, we proceed with generating a classification and regression tree using Weka's decision tree builder, as shown in Fig. 6.
The decision tree shown in Fig. 6 was built using Weka's REPTree algorithm.7 It is a 'fast decision tree learner and builds a decision/regression tree using information gain/variance reduction' (Hall et al. 2011). Since we are predicting the cross-ethnic parameter, which is a continuous variable, the REPTree algorithm uses variance reduction to select the best node to split on. We used the five varied parameters to build the tree shown in Fig. 6, in which the DITCH model parameters love-radar, sd-education-pref, mean-dating, new-link-chance, and sd-dating were the predictors and the output parameter cross-ethnic was the target variable. We set the minNum property of the classifier (the minimum total weight of the instances in a leaf) to 200 to avoid overfitting. The resulting tree had the following accuracy/error metrics on the test/unseen data: Mean Absolute Error: 0.9582; Root Mean Squared Error: 1.2995.

As the constructed tree shows (Fig. 6), ethnic diversity (or the lack of it) in the agent population was the strongest determinant of cross-ethnic marriages. Once again, love-radar was found to be the second most important determinant, especially in situations where some ethnic diversity existed. When the value of love-radar was set to 1 (i.e., only immediate neighbors in the social network were sought), it alone determined the percentage of cross-ethnic marriages; for higher values of the love-radar parameter (i.e., 2 and 3), the output was further influenced by new-link-chance and, in other instances, by the parameters related to agents' dating in the simulation.
K-means clustering on all 13 DITCH output variables: We now turn to the K-means clustering algorithm to find clusters in the generated dataset. We performed the cluster analysis on the 13 output variables of the DITCH model that were recorded in our simulation experiments. We chose the data from Phase-II, which involved five varied parameters for each sample area (a UK city), with 9720 runs altogether. Our purpose in applying this technique was to group output instances that were similar in nature into clusters. All output variables were first normalized before proceeding to the next step of finding the optimal number of clusters (k). We then followed the technique used by Edmonds et al. (2014), in which the within-group sum of squares is calculated against the number of clusters for multiple randomly initialized runs. The optimal number of clusters can then be identified as the point in the plot at which there is a bend, or elbow. Figure 7 (left) suggests the optimal number of clusters to be around 3 or 4, where a bend is observed. The silhouette analysis 8 shown in Fig. 7 (right) also indicates that the optimal value for k is around 3 or 4; this plot displays a measure of similarity between the instances in each cluster and thus provides a way to assess parameters such as the optimal number of clusters (Rousseeuw 1987). The results of this analysis confirm that the optimal number of clusters should be around 4. Hence, we ran the K-means clustering algorithm on all thirteen outputs; the centroids of the four K-means clusters are given in Table 3. The partitioning of the data into the four clusters gives a good split across the parameters explored.
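The elbow heuristic can be illustrated by computing the within-group sum of squares (WSS) for candidate values of k; a sharp drop followed by a flattening suggests the number of clusters. The 1-D data below are invented, not DITCH outputs, and the k=2 split is hard-coded at the obvious gap for brevity.

```python
# Sketch of the 'elbow' quantity: within-group sum of squares (WSS)
# for a given clustering, compared across candidate values of k.

def wss(groups, centers):
    total = 0.0
    for g, c in zip(groups, centers):
        total += sum((x - c) ** 2 for x in g)
    return total

# two well-separated 1-D clusters: k=2 should slash WSS relative to k=1
data = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8]

# k=1: one center at the overall mean
m = sum(data) / len(data)
wss1 = wss([data], [m])

# k=2: split at the gap, centers at the group means
low, high = data[:3], data[3:]
wss2 = wss([low, high], [sum(low) / 3, sum(high) / 3])
```

Plotting such WSS values against k produces the curve in Fig. 7 (left); the large drop from k=1 to k=2 here is the kind of bend the heuristic looks for.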
As Table 3 shows, the goodness of fit is high (~ 87%), indicating that the clusters are distinct, with an almost equal number of instances in each of the four clusters. The mean percentage of cross-ethnic marriages was highest in Cluster 2 (19.78%) and lowest in Cluster 4 (1.25%), while Clusters 1 and 3 were closer to each other in terms of average cross-ethnic marriages. These results are as we expect, as they present quite an accurate picture of the population distribution of ethnicities in the four UK cities (Newham, Dover, Bradford and Birmingham). We can check the distribution of the input parameter eth-proportions across these four clusters, and the resulting matrix in Table 4 shows that each region is quite accurately labeled in each cluster. Regarding the dominant ethnicity in which most of the cross-ethnic marriages occur: in Cluster 1, representing the sample area Birmingham, it is ethnicity-z (Black/Black British: Caribbean (B/BBC)), 5.64% of the total population; in Cluster 2, ethnicity-y (Asian/Asian British: Indian (A/ABI)), 6.57% of the total population; in Cluster 3, ethnicity-x (Asian/Asian British: Indian (A/ABI)), 17.9%; and in Cluster 4, ethnicity-x (White: Other (WO)), 1.83% of the total population.
Figure 8 (top) shows the 2D representation of all the data points of the four clusters. As discussed earlier, Clusters 1 and 3 have some overlapping points while Clusters 2 and 4 were distinct and separate. Finally, Fig. 8 (below) shows the variability in terms of the average cross-ethnic marriages across the four clusters.
---
Conclusions and outlook
As agent-based models of social phenomena become more complex, with many model parameters and endogenous processes, exploring and analyzing the generated data becomes even more difficult. We need a whole suite of analyses to look into the data that such agent-based models generate, incorporating traditional or dynamic social network analysis, spatio-temporal analysis, machine learning, or more recent approaches such as deep learning algorithms. A growing number of social simulation researchers are employing different data mining and machine learning techniques to explore agent-based simulations.

8 https://stat.ethz.ch/R-manual/R-devel/library/cluster/html/silhouette.html

Fig. 7 Finding the optimal number of clusters for the K-means clustering algorithm. Left: using the within-group sum of squares technique. Right: using the silhouette analysis
The techniques discussed in this paper are by no means exhaustive, and the exploration of useful analysis techniques for complex agent-based simulations is an active area of research. Lee et al. (2015), for example, examined multiple approaches to understanding ABM outputs, including both statistical and visualization techniques. The authors proposed methods to determine a minimum sample size, followed by an exploration of model parameters using sensitivity analysis. Finally, the authors focused on transient dynamics, using spatio-temporal methods to gain insight into how a model evolves over time.
In this paper, we propose a simple step-by-step approach that combines three different analysis techniques. For illustration, we selected an existing evidence-driven agent-based model by Meyer et al. (2014, 2016), called the 'DITCH' model. As a starting point, we recommend the use of exploratory data analysis (EDA) techniques for analyzing agent-based models. EDA provides a simple yet effective set of techniques for analyzing the relationship between a model's input and output variables. These techniques are useful for spotting patterns and trends in a model's output across varying input parameter(s) and for gaining insight into the distribution of the generated data. Sensitivity analysis (SA) techniques follow the exploratory step and are useful, e.g., for ranking input parameters in terms of their contribution to a particular model output. SA techniques are not only useful in identifying those parameters but also quantify the variability of the effect these input
---
Authors' contributions HP, MA and SJA drafted the manuscript; SJA and MS designed the study; HP and MA generated the data; HP, MA, SJA and MS analyzed and interpreted the data. All authors read and approved the final manuscript.
---
Author details
---
Consent for publication
Not applicable.
---
Ethical approval and consent to participate
Not applicable.
---
Competing interests
The authors declare that they have no competing interests.
---
Additional file
Additional file 1. Additional figures and table. |
The Polish economy has been transformed for more than three decades by diverse crises. The beginning of the changes was brought by the political transformation, which modified the model of the functioning of the State in almost all areas. To some extent, its continuation was the accession to the European Union, which opened not only markets for sales, labor, but also new sources of technological and scientific development. Economic development did not protect the economy from the negative effects of change, which occurred in the form of rising public debt, unemployment rates and reduced spending on important public sectors. Sudden breakdowns, in turn, forced changes in social needs, as exemplified by the explosive demand for e-services during the covid-19 pandemic. The article analyzed selected indicators that allowed analysis of the degree of differentiation of socio-economic development in Poland against the background of crises. | Introduction
The beginning of the process of economic, political and social change in Poland was marked by the political transformation begun in 1989. The rejection of the socialist model of society and the centrally planned economy, and the introduction of democracy and a capitalist economy instead, was meant to be the beginning of economic growth. It also meant institutional reconstruction, privatization, the transformation of society, and opening up to new markets, new technologies and unlimited access to knowledge. However, the extent of the changes and the high speed at which new development directions were implemented, together with the liquidation of subsidies and state-owned enterprises, caused a number of negative effects, among them rising unemployment and a decline in production in the early 1990s (Łazor, 2017, p. 285). Another economic shock was Poland's admission to European Union structures. The impact of the global financial crisis was felt by most countries around the world (Nazarczuk, 2012, p. 75); the Polish economy, however, proved quite resilient in this regard. In turn, the outbreak of the covid-19 coronavirus pandemic initiated a new era of uncertainty for the sustainable development of economies, not only for Poland but for the whole world. The objective of this study is to review and analyze the factors determining the development of Poland's socio-economic situation after 1989, with a special focus on the situation in 2020-2022, which includes the time of the covid-19 pandemic.
The analysis covered indicators illustrating the socio-economic situation, such as the public debt and unemployment rates, state spending on the basic areas of the national economy, and the digital services sector during the pandemic period. The article is based on statistical sources, reports and the literature.
---
Changes in Poland's socio-economic area after 2000
In 2000, a new stage of Poland's functioning in Europe symbolically began with its acceptance as a candidate country to the European Union. Accordingly, foreign policy aimed to strengthen Poland's position in the international arena. Internally, however, the country was struggling with many problems. The labor market in 2000 was characterized by a high, 15% unemployment rate due to demographic highs, low labor mobility, high labor costs and labor laws unfavorable to employers (Szulc, 2008, p. 21). High public debt (37.7%) led to the need to reduce budget spending (tab. 1); its growth resulted from the relatively high borrowing needs of the state budget, declining privatization income and changes in the zloty exchange rate (Połomka, Zalesko, 2015, pp. 166-177). In 2001, the unemployment rate increased to 17.5%, and public debt and state budget expenditures increased.
The year 2004 was an important one for the Polish state due to Poland's accession to the European Union. Accession significantly influenced economic development, and the situation in the labor market also improved through the opening of European labor markets and the inflow of foreign direct investment, which enabled the development of entrepreneurship in the country. However, the unemployment rate remained high in both 2004 and 2005.
The effects of the 2008 global financial crisis reached Poland with some delay. During this period, a decline in business activity could be observed due to declining foreign demand, a temporary outflow of foreign capital and the depreciation of the national currency (Nazarczuk, 2013, p. 80). The negative effects of the crisis were felt only by certain industries, mainly energy, banking (to a lesser extent than in other countries) and finance, as well as the construction, real estate and transportation sectors (Nazarczuk, 2013, p. 80). The years 2008-2009 also saw a 1.6% slowdown in GDP growth, a 3.6% increase in the unemployment rate and rising public debt.
Another global crisis, caused by the outbreak of the covid-19 pandemic, brought multidimensional consequences, both economic and social (Czech, et al., 2020, p. 27). There was a decline in GDP, an increase in inflation, an increase in the unemployment rate and negative developments in the labor market. The introduction of support programs for the Polish economy, in the form of anti-crisis shields and credit holidays, directly affected the state of public finances (Czech, et al., 2020, p. 27). The covid-19 pandemic caused an increase in unemployment (to 5.4%) and in public debt (to 54.2% of GDP).
Despite the constant deficit of public funds in almost every sector, in the years under review, state budget expenditures on higher education and science, public administration (2004 was the exception) and culture and national heritage protection gradually increased. This trend also continued in the area of social welfare, but in 2009 state budget expenditures for this purpose were decreasing (tab. 2). In 2000 and 2004, expenditures on public administration accounted for 4% of total expenditures and were realized at 96%. A similar situation applied to expenditures of all analyzed areas, whose share in total state expenditures did not change. In 2000, in the field of culture and national heritage protection, 0.41% of total expenditures were spent, and this was related, among other things, to the modernization of the Wroclaw Opera House, the Philharmonic Hall in Lodz, or the construction of the Theater and Philharmonic Hall in Lublin. In the case of higher education and science (5% of total expenditures, budgeted separately), material aid for students, teaching activities and the construction of the Jagiellonian University Biological Sciences Complex in Cracow or the expansion of the Adam Mickiewicz University campus in Poznań were financed. After Poland's accession to the European Union, expenditures on higher education and science increased, increasing the mobility of students and academics. Social assistance, which includes family, nursing and child care benefits, subsidies to the Labor Fund and the Veterans Fund, accounted for 8% of expenditures from the state budget. Social policy financed family benefits and contributions to social insurance, family, nursing and child care benefits, the operation of social welfare homes or a subsidy to the alimony fund. 
In the following year, 2005, expenditures on public administration amounted to 4% of total state expenditures and were lower than expected due to the lower cost of the elections held for the country's President and the Sejm and Senate. Expenditures on culture and national heritage protection were low, accounting for only 0.49% of total expenditures. In contrast, 4.6% in total expenditures was spent on higher education and science. Nearly 10% of total expenditures were expenditures in the area of social policy, the amount of which depended on the amount of payments of family benefits and allowances and lower expenditures on paying health insurance premiums for recipients of family benefits.
In 2008, expenditures on public administration accounted for 3.6% of total expenditures, while expenditures on culture and national heritage protection did not exceed 0.5% of total expenditures. The year 2008 was also a period of reform of the system of financing and operation of scientific units in order to improve the competitiveness of Polish science. At that time, the research infrastructure of universities was financed, as well as the tasks of the program for the development and maintenance of information and computer infrastructure of science and its digital resources for 2006-2009 or the program for the development of information infrastructure of science for 2007-2013. In the area of higher education and science (accounting for 5.3% of total expenditures), tasks related to the universality of education, material assistance for equalization of opportunities, raising the level and quality of education and support for international cooperation were implemented. Expenditures in the area of social policy accounted for 4.6% of total outlays and were related to financing, among other things, family benefits and allowances, care, psychological and living services, as well as the implementation of government programs such as state food aid. In 2009, expenditures on public administration accounted for 3% of total expenditures. Expenditures from the public budget for culture and national heritage protection accounted for 0.5% of total expenditures and included activities such as replenishing museum collections and rebuilding library book collections, improving the standard of services provided to people with disabilities. Expenditures on higher education and science accounted for 5% of total expenditures in the period under review.
92% of expenditures in this regard were allocated to subsidizing the activities of higher education institutions, their teaching activities and increasing the availability of higher education for people in a difficult financial situation and people with disabilities. Within the amount for statutory activities of scientific units and own research of higher education institutions, maintenance of specialized scientific and research equipment was financed, as well as ministerial programs and projects in the development of information and information technology infrastructure of science, work continued on the creation of the Copernicus Science Center, and tasks were carried out within the scope of Operational Programs: Innovative Economy, Increasing the Competitiveness of Enterprises, Human Capital, and the Norwegian Financial Mechanism, the EEA Financial Mechanism. Expenditures on social policy accounted for 4% of total state expenditures and focused on financing active ways of assistance, e.g. increasing the availability of specialized services or social work.
Expenditures of the state budget in 2019 amounted to PLN 414.3 billion and were lower by PLN 2 billion than the amount planned in the Budget Law. Within this amount, PLN 2.19 billion was allocated to the protection and popularization of heritage and national identity at home and abroad (1% of total expenditures); the remaining PLN 1.04 billion financed artistic and dissemination activities, the promotion of culture and intercultural dialogue. Within the framework of this function, among other things, portals related to archives were developed and popularized, and library collections were digitized. 6.7% of state budget expenditures went to higher education and science. In this respect, the goals of raising the level of scientific research results, raising the quality of education and increasing the practical application of research and development work were pursued. A significant part of budget expenditures was made within the framework of social policy: 35% of total expenditures. The indicated amount financed, among other things, the extension of the Family 500+ program to the first child, and Senior+. 7% of total expenditures went to public administration.
In 2020, expenditures on public administration decreased (to 2.7% of total expenditures). Expenditures on culture and national heritage increased (though they still did not exceed 1% of total expenditures). 7% of total expenditures were allocated to higher education and science. The pandemic situation resulted in the extension of research projects scheduled for 2019-2020 by another year and the possibility of spending unspent funds in 2021. In addition, the number of foreigners studying at Polish universities decreased significantly, and university classes were conducted remotely. During the period under review, the amount of expenditure on social policy increased, accounting for 43% of total expenditures. Within the framework of this function, the following were financed, in larger numbers than expected: upbringing benefits, family benefits and the "Maluch" program. In 2021, expenditures on public administration accounted for 3% of total expenditures. Expenditures on culture and heritage protection saw a slight increase (to 0.8% of total expenditures), related, among other things, to the transfer of activities to the Internet and adjustment to the sanitary regime. Expenditures on higher education and science accounted for 6% of total expenditures, among which we can highlight the financing of the NAWA project and the transfer of additional funds for competitions within the framework of the Intelligent Development Operational Program. Social policy expenditures accounted for 30% of total expenditures. The decrease in expenditures in this area was related, among other things, to the decrease in the number of people receiving non-cash assistance, the resignation of seniors from participation in the Senior+ and Active+ programs, and changes in the income situation of families with children.
---
Selected socio-economic aspects of the covid-19 pandemic
Although rapid progress in digitalization and the use of modern technology had already been noticeable in recent years, it was further accelerated by the covid-19 pandemic. By 2020, nearly 60% of the world's population (more than 4.6 billion people) had access to the network, and 4.3 billion were using mobile Internet (Matera, Skodlarski, 2021, p. 371). Pandemic restrictions forced the transfer of education, work, entertainment and many other aspects of life into virtual space. Over the past few years, the number of people using the Internet has increased noticeably, from 58.8% in 2010 to more than 85% in 2021 (tab. 3). The increase was caused by a change in the way people communicate and maintain social contacts, as well as perform personal and professional duties. As a result of the pandemic, the percentage of video conferencing users increased (from 20% in 2010 to 56.4% in 2021), as did the share of Internet users who regularly use social media (28% in 2010 and 56.8% in 2021). Occupational activity was correlated with Internet use (it was used by 94% of those working and 55% of those not working). Among the employed, the fewest Internet users were found among farmers, while among managers and specialists, technicians and middle staff, administrative and office workers, service workers and private entrepreneurs, all or almost all used the Internet (CBOS, 2022, p. 7).
Particular growth occurred in the e-commerce domain. Nearly 40% of the public in the 17-74 age group searched for information about goods and services online in 2010, while by 2021 it was already 65.6%. In 2010, 20.2% of users aged 16-74 made purchases of goods and services online, with twice as many doing so in urban areas as in rural areas (tab. 2). The number of users did not increase significantly by 2015, but in 2021 it was more than double that of 2010. Significant changes took place in rural areas, where the level of use of online shopping more than tripled. According to the CBOS survey, in 2022 nearly two-thirds of Poles (64%, or 84% of Internet users) shopped online, and more than one-third of all adults (36%, or 46% of Internet users) made at least a single sale online (CBOS, 2022, p. 10).
The number of people using the Internet for personal use has grown steadily. While the change in 2015 compared to 2010 can be described as quite evolutionary (the percentage of Internet users rose from 58.8% to 68%), in 2021 this figure rose to 85.4%. During this period, the gap between urban and rural residents narrowed noticeably, to 8.6% in favor of urban residents (correspondingly, in 2010, 17% fewer rural residents than urban residents used the Internet, and in 2015, 12%).
According to the CBOS survey, the largest increases among user groups in 2022 were recorded among those aged 45-54 and over 75: compared to 2021, their numbers increased by 10% and 9%, respectively (CBOS, 2022, p. 6).
One of the most valuable benefits of using the Internet was the ability to manage one's finances remotely through e-banking services. The number of e-banking users in Poland, slightly more than a quarter of the population in 2010 (25.3%), rose to 31.2% in 2015 as banks successively introduced and developed these services. The crisis caused by the covid-19 pandemic pushed the number of e-banking customers above half of the population (52.2%). Online government services were used by 47.5% of Poles in 2021 (up from 28.1% in 2010 and 26.6% in 2015).
As recently as 2010, about ¾ of all Polish enterprises had access to broadband internet and a website. In 2015, nearly 92% of enterprises were connected to broadband internet, but only 7.9% of them had a connection speed of at least 100 Mbps. The share with their own websites was unchanged from 2010, at 65.5% of businesses operating in Poland; social media profiles were maintained by more than 22% of companies at the time, and only slightly more than 7% used cloud services.
Even before the pandemic, micro, small and medium-sized enterprises ("MSMEs") recognized the value of using information and communication technologies (ICT). This was confirmed by the results of a survey conducted by the Polish Economic Institute (PIE) in the last quarter of 2019. At that time, entrepreneurs appreciated the importance of using ICT, pointing to the following benefits: more efficient communication with suppliers and customers; improved brand awareness, corporate image and customer relations; increased competitiveness of the company; and reduced employment, a direct result of the automation of company processes. According to the 2019 PIE survey, nearly one-quarter of the companies surveyed said their enterprises had increased their use of modern technology compared to 2018. This group was dominated by large enterprises in the information and communication section. The use of information and communication technologies was seen by the respondents as a competitive advantage and even a condition of the company's operation (Dębkowska et al., 2020, p. 6).
The PIE report indicates that in 2019, 47% of Polish enterprises highly rated their degree of use of modern technologies in production or services, 54% their use of modern technologies in communicating with customers, and 48% their use of multi-channel sales of their products or services, while 73% did not invest in modern technologies (Dębkowska et al., 2020, p. 5). Business entities during the pandemic sought to support their sales and marketing activities with e-commerce and to move customer service to remote channels. Examples of such changes in business models could be seen, for example, among entities in the automotive industry (e.g., dealerships of car brands) and related businesses: leasing and insurance companies or banks offering supporting financial services. From these partnerships, sales platforms were born already in the first months of the pandemic, where one could choose a car, ask the dealer to bring the chosen vehicle to one's home for a test drive, and then, after making a choice, be guided through the process of obtaining financing and insurance in a completely remote manner. Entities that failed to implement online sales and customer service, for various reasons, often lost the battle for survival. Many industries were affected, including restaurants and entertainment in the broadest sense. Here, too, however, there were entrepreneurs who switched to customer service on social media platforms and remote delivery (the press even reported on neighborhood grocery stores offering door-to-door delivery).
According to the PIE report, 91% of companies used at least one modern technology during the pandemic, 70% used modern forms of communication with customers, and 10% of large companies implemented systems to manage remote work during the pandemic without having used them before (PIE, 2020, p. 5). The growth of ICT usage in Polish enterprises is clearly visible in the 2021 statistics. They show that the use of broadband internet increased compared to 2015 by as much as 6.5 percentage points (from 92% to 98.5%), and access to a connection with a speed of at least 100 Mbps increased by more than 40 percentage points (from 7.9% to 48.4%). 71.4% of companies had their own website, an increase of almost 6 percentage points, and the number of companies with social media profiles doubled (from 22.2% to 45.6%), which for some MSMEs could be an alternative to having a website. Meanwhile, the use of cloud computing by Polish companies nearly quadrupled (from 7.3% to 28.7%). The so-called "cloud" (PIE, 2020, p. 42), which facilitates file sharing and simultaneous work on a document open at the same time on multiple devices, became an indispensable technology during the covid-19 pandemic. The benefits it brought to day-to-day work led to an intensification of project implementations which, before the pandemic, would have reached only the pilot phase in MSMEs.
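Several of the statistics in this section compare shares of a population at two points in time, and it is worth distinguishing a change in percentage points from relative growth. A small sketch (the helper functions are purely illustrative; the figures are those quoted in the text) makes the difference explicit:

```python
def pp_change(old, new):
    """Change in percentage points (simple difference of shares)."""
    return new - old

def rel_change(old, new):
    """Relative (multiplicative) change, expressed in percent."""
    return (new - old) / old * 100

# Broadband access among enterprises, 2015 -> 2021
print(round(pp_change(92.0, 98.5), 1))   # 6.5 percentage points
print(round(rel_change(92.0, 98.5), 1))  # ~7.1% relative growth

# Cloud computing use among enterprises, 2015 -> 2021
print(round(pp_change(7.3, 28.7), 1))    # 21.4 percentage points
print(round(rel_change(7.3, 28.7), 1))   # ~293.2%, i.e. roughly a fourfold increase
```

The same figure can therefore read as a modest 6.5-point shift or as 7.1% growth, depending on which measure is meant; mixing the two is a common source of confusion in reports of this kind.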
While the market for enterprises and private users of virtual space saw rapid changes compared to previous years, the situation of cultural institutions was more difficult. On this front, there were many barriers related to a lower capacity to adapt to rapid change. A relatively new challenge in increasing the availability of knowledge was the digitalization of library collections. This was driven not only by the growing needs of the public for easier and wider access to books, journals and other publications held in library repositories and lending libraries, but also by the need to adapt to operating under a tighter sanitary regime.
The first analysis of the state of digitization of libraries in Poland was carried out in 2002, initiated by the National Library of Poland (Potrzebnicka, 2005, pp. 66-77). Among the 55 libraries surveyed at the time, 25 had begun digitizing, and 14 were forming their own laboratories. After nearly two decades, this situation has changed significantly, and a particular evolution can be seen in the last two years (2020 and 2021). The number of objects digitized by public libraries in 2021 increased by 16.5% compared to 2020 (from 2028.3 thousand objects to 2362.2 thousand). Due to the lack of reader access to library buildings, new services were introduced or existing ones expanded. Access to online resources and the use of e-media and e-books were promoted. In March and April 2020, compared to the previous year, statistics on the use of digital platforms related to European library services doubled (Sójkowska, 2020, p. 3). Over the past few years, there has been clear progress in the dissemination of libraries' digital offerings, especially in the booking and ordering of library materials (tab. 5). Compared to 2016, the number of libraries offering these services increased by almost a third by 2022. An important element, significantly improving the use of resources, was the on-line catalog: in 2016, more than half of libraries had one (63.1%), while in 2022 it was already more than 80%. Maintaining accounts on social networks was also notable: while in 2016 more than a third of Polish libraries had them (36.8%), in 2020 this number exceeded half (51.4%). In 2021, it was also reported that 16.2% of libraries allowed remote registration of new readers; interestingly, the following year their number increased only to 17.3%.
---
Summary
Each of the considered crises has significantly affected indicators related to economic development. The opening of the market as a result first of the political transformation period and later of Poland's accession to the European Union enabled economic and technological development. This was associated not only with an infusion of foreign investment and new sources of funds, but also with increasing spending from the state budget, especially on higher education and science. However, it has not eliminated problems such as rising unemployment, expanding public debt and increasing social welfare expenditures. Although the 2008 crisis did not have a significant impact on Poland's economic situation, spending on public administration was reduced during this period. A similar decrease in this regard also took place in 2020. Spending on higher education and social assistance, however, increased. Expenditures on culture and national heritage protection in the period under review did not exceed 1% of total budget expenditures.
The permanent effects of the pandemic on the economy include accelerated digital transformation and, for example, remote work, which was a rather rare benefit before the pandemic but during it came to be used for all types of work that could be performed at home. As a result of the pandemic crisis, digitization in Polish companies has advanced more rapidly, especially in the financial, transportation, education and health sectors. The use of apps, automatons and robots in production and commerce, bots in communications, e-banking, and e-learning and teleconferencing platforms has increased. Work on modernizing the logistics industry has also accelerated, driven in part by the explosive growth of e-commerce. The demand for online activities has also affected cultural institutions, including libraries, which have greatly accelerated the implementation of projects to digitize services and resources.
Background: Child maltreatment is a prevalent and notable problem in rural China, and the prevalence and severity of depression in rural areas are higher than the national norm. Several studies have found that loneliness and coping skills respectively mediate the relationship between child maltreatment and depression. However, few studies have examined the roles of loneliness and coping skills in child maltreatment and depression based on gender differences. Methods: All participants were from rural communities in Shandong province and aged 18 years or older; 879 valid samples (female: 63.4%), ranging in age from 18 to 91 years, were analyzed. The Childhood Trauma Questionnaire-Short Form (CTQ-SF), the Center for Epidemiologic Studies-Depression scale (CES-D), the Simple Coping Style Questionnaire (SCSQ), and the Emotional and Social Loneliness Scale (ESLS) were used to evaluate child maltreatment, depression, coping skills and loneliness. Results: Child maltreatment was more common and severe in males than females (F = 3.99; p < 0.05). Loneliness and coping skills partially mediated the relationship between child maltreatment and depression in males, but loneliness fully mediated the relationship between child maltreatment and depression in females. In this study, males were more likely to experience child maltreatment. Child maltreatment and depression were correlated. We also found a mediating role of loneliness and coping skills for males and a mediating role of loneliness in females. | Background
Child maltreatment is a public health problem worldwide [1,2] and can influence victims' mental health from adolescence through adulthood and across the lifespan. There are five types of child maltreatment: physical abuse, emotional abuse, sexual abuse, physical neglect, and emotional neglect. In rural China, the estimated prevalence of any child maltreatment has remained at 66.3% [3]. Boys were more likely to be physically abused than girls, and girls were more likely to be neglected because of parental gender-linked expectations for children [4]. A school-based study reported that 43.09%, 41.65%, and 42.18% of rural children experienced physical abuse, emotional abuse, and neglect, respectively [5]. In regions with a moderate level of economic growth, 80.5% of boys suffered abuse or neglect, while 75.1% of girls experienced any abuse or neglect [6]. A systematic review showed that 26.6%, 19.6%, and 26.0% of children suffered physical abuse, emotional abuse, and neglect [7]. Together, these findings indicate that the prevalence of child maltreatment in rural China is higher than the national norm. Consequently, child maltreatment in rural areas is a prevalent and notable issue.
Depression is also a major public health issue and a common disease among adolescents and geriatric patients in China [8,9]; it affects people's physical and mental health and increases the risks of suicide and morbidity. A meta-analysis reported that 22.7% of geriatric patients had depressive symptoms, that women were more likely to develop depressive symptoms than men, and that rural populations developed depressive symptoms more easily than urban populations [10]. Among the rural elderly in China, 54.7% developed depressive symptoms [11]. The prevalence and severity of depression in rural areas were thus higher than average. Depressive disorders rose from the 19th to the 13th leading cause of the global burden of disease (GBD) between 1990 and 2019, and ranked 4th and 6th for the 10-24 and 25-49 age groups, respectively [12].
Child maltreatment is a potent risk factor for internalizing problems, such as depression, anxiety and loneliness [13]; numerous studies in China have shown that, among those who experience child maltreatment, females are more vulnerable to depression than males [3,14]. Child maltreatment is also a risk factor for developing maladaptive coping skills [15]; people who lack adaptive coping skills may find it difficult to confront stressful events and regulate emotional problems, and are therefore prone to depression. Previous studies have indicated that coping skills are associated with depressive symptoms [16,17]. Thus, coping skills are a protective factor in the prevention of depression.
Studies have found that child maltreatment is associated with loneliness, coping skills and depression [18,19]: child maltreatment is positively correlated with loneliness and depression and negatively correlated with coping skills. Several studies have found that loneliness and coping skills mediate the relationship between child maltreatment and depression [20,21]. However, few studies have examined these mediating roles based on gender differences. One study investigated the mediating effects of different coping styles on the relationship between childhood maltreatment and depressive symptoms among Chinese male and female undergraduates [22], but the coping styles examined there, consisting of six dimensions, differ from the coping skills measured in this study. Other studies have reported the mediating effect of coping skills in young male or female adults [21,23].
This study aimed to examine the roles of loneliness and coping skills in the relationship between child maltreatment and depression among rural males and females in China, in order to better understand the cognitive-affective mechanisms underlying child maltreatment and depression and to improve the prevention of and intervention in depression.
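Mediation of the kind described above is commonly tested with a product-of-coefficients approach: estimate path a (predictor to mediator), path b (mediator to outcome, adjusting for the predictor), and take a*b as the indirect effect. The sketch below is illustrative only, not the authors' analysis: it simulates data in which loneliness (M) partially transmits the effect of maltreatment (X) on depression (Y), and recovers the indirect and direct effects via ordinary least-squares slopes, using the Frisch-Waugh residualization trick for the adjusted path b. All variable names and effect sizes are invented.

```python
import random

def slope(xs, ys):
    # OLS slope of y on x (simple regression).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def resid(xs, ys):
    # Residuals of y after regressing y on x.
    b = slope(xs, ys)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return [y - (my + b * (x - mx)) for x, y in zip(xs, ys)]

random.seed(1)
n = 2000
X = [random.gauss(0, 1) for _ in range(n)]            # maltreatment score
M = [0.6 * x + random.gauss(0, 1) for x in X]         # loneliness (mediator)
Y = [0.5 * m + 0.2 * x + random.gauss(0, 1)
     for x, m in zip(X, M)]                           # depression

a = slope(X, M)                       # path a: X -> M
b = slope(resid(X, M), resid(X, Y))   # path b: M -> Y adjusting for X
c = slope(X, Y)                       # total effect of X on Y
indirect = a * b                      # mediated (indirect) effect
direct = c - indirect                 # direct effect c'

print(round(indirect, 2), round(direct, 2))  # close to 0.30 and 0.20
```

In applied work one would add confidence intervals for the indirect effect (e.g., via bootstrapping) and fit the adjusted model with a proper multiple-regression routine, but the decomposition total = direct + indirect is the same.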
---
Child maltreatment and depression
Maltreatment can negatively impact development by altering the developing neural system or disrupting other factors. Furthermore, maltreatment may exacerbate or express neuropsychiatric syndromes in individuals with genetic vulnerabilities (e.g., major depression) [24]. People who are exposed to maltreatment in childhood are at risk of having a range of poor mental outcomes, such as major depressive disorder (MDD), posttraumatic stress disorder (PTSD), and substance abuse [25][26][27]. Specifically, numerous studies have suggested that child maltreatment is associated with depression during childhood, adulthood, and the geriatric period [28][29][30]. A retrospective study indicated that patients with depression experienced more severe childhood maltreatment than healthy controls [31].
Other studies have shown that the number of child maltreatment experiences is associated with increased depressive symptoms [32,33]. One study found that elderly people in Brazil who reported cumulative maltreatment experiences were more likely to suffer from depression, although there was no impact on the severity of depression [34]. However, it is not clear whether the influence of child maltreatment on depression is similar or different according to sex. Some studies have indicated that women who experienced child maltreatment were more likely to experience depression than men [14,35]. A few studies have reported the reverse result, that men who experienced maltreatment were more likely to develop depression [36]. Moreover, several studies have found a similar effect of child maltreatment on depression for males and females [37,38].
---
Child maltreatment and loneliness
Loneliness has been characterized as a feeling of social isolation and separateness [39,40]. People isolated from society find it difficult to build and maintain social connections and acquire social support; thus, they become lonely. The feeling of loneliness is more intense from middle age onward. One study estimated that twenty-eight percent of older Chinese people reported feeling lonely, and approximately seven percent reported often or always feeling lonely [41]. A growing body of literature indicates that loneliness is associated with physical health, mental health and cognitive function, including depressive symptoms [42], mortality [41], systolic blood pressure [43], and impaired cognition (Alzheimer's disease) [43].
Studies have shown that childhood trauma is positively correlated with loneliness [44,45], meaning that individuals who have experienced childhood trauma are more prone to loneliness than those who have not. Most importantly, previous studies have pointed out a relationship between child maltreatment and loneliness [13,46].
Converging evidence provides further empirical support. One study reported that childhood maltreatment is a non-negligible factor in adult loneliness [47]. Women who had been maltreated were lonelier and had a more negative network orientation than non-abused women because they tended to isolate themselves socially [48]. Findings from several studies indicate that children exposed to abuse also experience loneliness and social isolation in their lives, preventing the development of adequate and efficient social skills [49,50].
---
Child maltreatment and coping skills
Coping skills represent the ways in which individuals deal with stressful or negative experiences [51]. In maltreating families, parents often conceal emotional expressions, interact in hostile and aggressive ways, and rely on punitive interaction styles. Because of the resulting high levels of unpredictability in parent-child interactions and in the home generally, maltreated children fail to model appropriate coping skills when they encounter stress and try to control what happens to them, leading to a feeling of helplessness [18].
Previous studies have reported that child maltreatment is associated with coping skills [52,53]; people who experienced maltreatment coped poorly with stress, regulated emotion poorly, and had low coping skills. Child maltreatment also plays a major role in adolescent well-being and coping [54].
---
Loneliness as a mediator
People who have experienced child maltreatment are likely to withdraw from society; they are reluctant to contact others due to feelings of inferiority and distress, so they are unable to receive social support or concern and eventually become lonely. The association between loneliness and depressive symptoms appears to be stable across ages [42]; a lonely person is more likely to be depressed than one who is not lonely.
Studies have investigated the role of loneliness as a mediator in the relationship between childhood trauma and adult psychopathology, indicating both direct and mediational effects of social resources on adult depressive symptoms in women with a history of multi-type child maltreatment [32]. Loneliness has also been found to mediate the relationship between childhood abuse and six adult psychiatric disorders: depression, generalized anxiety disorder (GAD), mixed anxiety and depression (MAD), phobia, post-traumatic stress disorder (PTSD), and psychosis [20].
---
Coping skills as a mediator
On the one hand, child maltreatment leads to poor coping skills, making it harder for maltreated children to confront and deal with stress. Meanwhile, the relationship between stress and major depression is well established [55], with stress increasing the likelihood of depression. On the other hand, a high level of coping skills may buffer or decrease the impact of maltreatment on depression.
Compared with non-maltreated children, maltreated children use fewer coping skills, and low coping skills may exacerbate depression. Studies have shown that coping skills mediated the relationship between child maltreatment and internalizing and externalizing behaviors [15]. Other studies have also found that coping skills mediate and moderate the impact of maltreatment on depressive symptoms [23].
---
Current study
This study aims to answer two research questions. First, we examined the roles of loneliness and coping skills in the relationship between child maltreatment and depression. Second, we tested whether the mediation models differ by gender.
---
Methods
---
Participants
This cross-sectional study was conducted in Shandong Province, China. All participants were from one county (Taierzhuang). Shandong Province is located in the east of China, with economic prosperity in both industry and agriculture [56]. Taierzhuang County is located in the south of Shandong and has a rural population of about 230 thousand [57]. We used a random cluster sampling method: all five towns in Taierzhuang County were selected for interviews, and in each town one village was randomly selected. People aged 18 years or older in the selected villages were asked to participate in this study. In total, 879 participants were interviewed, a response rate of 94.9% (879/926).
---
Data collection
The data for this study were collected in November 2019. All interviewers were trained postgraduate students who understood the research and the questionnaires. Participation was voluntary, and participants provided written informed consent; for illiterate and semi-illiterate participants, written informed consent was provided by their legal guardians. The interviewers conducted face-to-face interviews and completed the questionnaires according to the subjects' responses. After the survey was completed, at least two trained students checked the contents of the questionnaires, and participants whose questionnaires contained missing or unclear data were revisited so the questionnaires could be completed.
---
Measures
---
Child maltreatment
The Childhood Trauma Questionnaire-Short Form (CTQ-SF; [58]) is a 28-item self-report scale rated on a five-point Likert scale ranging from 1 (none) to 5 (always). A sample item is "I thought that my parents wished I was never born". The final score is the sum of all item scores, with higher scores reflecting more frequent and severe experiences of child maltreatment before the age of 16. The CTQ-SF has demonstrated the coherence and viability of its constructs [58]. In this study, internal consistency was α = 0.87 for both males and females.
---
Depression
The Center for Epidemiologic Studies Depression Scale (CES-D; [59]) is a brief 20-item self-report measure (e.g., "I felt my life was failing" and "I felt lonely") rated on a four-point Likert scale from 0 (within 1 day) to 3 (five to seven days). The final score is the sum of all item scores, with higher scores representing a higher frequency of depressive symptoms during the past week. The scale is a valuable tool for studying the relationships between depression and other variables [59]. In this study, internal consistency was α = 0.90 for males and females.
---
Coping skills
The Simple Coping Style Questionnaire (SCSQ; [60]) is a 20-item scale, developed in the context of Chinese culture, rated on a four-point Likert scale ranging from 0 (not taken) to 3 (often). The scale reflects the actions respondents might take, or the attitudes they might exhibit, when suffering setbacks and encountering difficulties. Items 1-12 measure positive coping (e.g., "Tried to see the bright side of things"), and items 13-20 measure negative coping (e.g., "Tried to forget the whole thing") [61]. The final score is the sum of all item scores, with higher scores representing greater coping skills. The internal consistency coefficient of the scale was 0.90 [60]. In this study, internal consistency was α = 0.61 for males and females.
---
Loneliness
The Emotional and Social Loneliness Scale (ESLS; Wittenberg, 1986, cited in PR Shaver and KA Brennan [62]) is a 10-item scale rated on a five-point Likert scale ranging from 1 (never) to 5 (very often). A sample item is "I haven't a special love relationship". Five items are reverse-scored (e.g., "Someone could accompany me"). The final score is the sum of all item scores, with higher scores reflecting a higher level of loneliness in the past year. In this study, internal consistency was α = 0.75 for both males and females.
---
Sociodemographic variables
Gender was measured as male (1) and female (2). The participants' ages were calculated from their dates of birth and divided into three groups: young (18-44 years old; 1), middle-aged (45-64 years old; 2), and old (65 years old and above; 3). Ethnicity was assessed as Han (1) and other (2). Marital status was assessed as unmarried (1) and married (2). Education was assessed as illiteracy and semi-illiteracy (1), primary school (2), and middle school and above (3). Only-child status was assessed as yes (1) and no (2). Living alone was assessed as yes (1) or no (2). Having offspring was assessed as yes (1) and no (2). Income level was assessed as higher (1), average (2) and lower (3).
---
Statistical methods
Statistical analyses were performed using SPSS, version 23.0. Descriptive analyses are reported as means and standard deviations for continuous variables and as numbers and percentages for categorical variables. One-way ANOVA or the Chi-square test was conducted to assess differences in these variables across genders. Bivariate correlation analysis was conducted among the independent variable, mediators, and outcome variable. Linear regression was used to model the relationships between child maltreatment, loneliness, coping skills and depression while controlling for sociodemographic variables; categorical variables were transformed into dummy variables. We analysed the data separately by sex. All significance tests were two-tailed, and a p-value of 0.05 or lower was considered statistically significant.
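The regression-based mediation approach described above can be sketched as follows. This is an illustrative, self-contained example on synthetic data (the variable names, scales and effect sizes are invented for illustration and are not the study's data); it estimates path a (maltreatment to mediator), path b and the direct effect c' (mediator and maltreatment to depression), and the total effect c, via ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 440  # illustrative sample size, roughly one gender subgroup

# Synthetic data: maltreatment (x), one mediator, e.g. loneliness (m),
# and depression (y). Coefficients below are arbitrary illustrations.
x = rng.normal(42, 9, n)                      # CTQ-SF-like total scores
m = 0.2 * x + rng.normal(0, 5, n)             # path a: x -> m
y = 0.1 * x + 0.4 * m + rng.normal(0, 5, n)   # paths c' and b

def ols_slopes(outcome, *predictors):
    """Return OLS coefficients (intercept first) of outcome on predictors."""
    X = np.column_stack([np.ones(len(outcome))] + list(predictors))
    return np.linalg.lstsq(X, outcome, rcond=None)[0]

a = ols_slopes(m, x)[1]            # path a: effect of x on m
c = ols_slopes(y, x)[1]            # total effect of x on y
coefs = ols_slopes(y, m, x)        # [intercept, b, c']
b, c_prime = coefs[1], coefs[2]
indirect = a * b                   # indirect (mediated) effect

print(f"a={a:.3f}  b={b:.3f}  c={c:.3f}  c'={c_prime:.3f}  ab={indirect:.3f}")
# For single-mediator OLS with intercepts, c = c' + a*b holds exactly in-sample.
assert abs(c - (c_prime + indirect)) < 1e-8
```

In the study, two mediators (loneliness and coping skills) are entered together, so the total effect decomposes into c' plus the sum of the two indirect effects; the single-mediator case above shows the core identity.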
---
Results
---
Descriptive statistics and bivariate correlations
This study investigated 879 participants from rural communities in Shandong Province, China. The sample characteristics and descriptive analyses are presented in Table 1. The largest age group among males (43.2%) was the elderly, while among females it was the middle-aged (42.9%). The percentage of married women was higher than that of married men (98.4% vs 96.3%, p < 0.05). Of the women, 44.9% were illiterate or semi-illiterate, whereas 45.7% of the males had a middle school education or above. Most participants were not only children, had offspring, and lived with at least one other person (96.2%, 96.9%, and 89.3%, respectively). The mean CM score of the participants was 42.10 (SD = 0.90). Females were more depressed and lonelier than males, but the differences were not significant (F = 3.261, p > 0.05; F = 3.25, p > 0.05). More detailed information is provided in Table 1.
Bivariate correlation analysis revealed that CM, loneliness, coping skills and depression were mutually significantly associated (p < 0.001), as shown in Table 2. For males, greater severity of CM was associated with fewer coping skills (r = -0.214, p < 0.001), more depression (r = 0.330, p < 0.001), and more loneliness (r = 0.308, p < 0.001). For females, more severe child maltreatment was associated with fewer coping skills (r = -0.251, p < 0.001), more depression (r = 0.314, p < 0.001) and more loneliness (r = 0.454, p < 0.001).
---
Mediation analysis for males
A mediation model was used to examine the mediating roles of loneliness and coping skills in the relationship between CM and depression. Table 3 shows that males who experienced child maltreatment reported higher levels of loneliness (path a: b = 0.190, p < 0.001) and lower levels of coping skills (path a: b = -0.127, p < 0.01). The effects of loneliness and coping skills on depression were significant (path b: b = 0.441, p < 0.001; b = -0.167, p < 0.01). A total effect of child maltreatment on depression was observed (c = 0.238, p < 0.001). After controlling for the mediating variables, the link between maltreatment and depression remained significant (direct effect c': b = 0.133, p < 0.01). Tests of the indirect effects of loneliness (ab = 0.084) and coping skills (ab = 0.021) were significant. Figure 1 presents the mediating roles of loneliness and coping skills in the relationship between child maltreatment and depression among males: loneliness and coping skills partially mediated this relationship.
---
Mediation analysis for females
Table 3 demonstrates that female participants who experienced child maltreatment reported higher levels of loneliness (path a: b = 0.265, p < 0.001) and lower levels of coping skills (path a: b = -0.113, p < 0.001).
The effect of loneliness on depression was significant (path b: b = 0.666, p < 0.001), but the effect of coping skills on depression was not (path b: b = -0.095, p = 0.072). A total effect of child maltreatment on depression was observed (c = 0.223, p < 0.001). However, after controlling for the mediating variables, the link between maltreatment and depression was no longer significant (c': b = 0.036, p = 0.327). The indirect effect of loneliness (ab = 0.176) was significant. Figure 2 presents the mediating role of loneliness in the relationship between child maltreatment and depression in females: loneliness fully mediated this relationship, whereas coping skills (ab = 0.011) did not mediate it.
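As a consistency check, the reported coefficients satisfy the standard decomposition of a total effect into a direct effect plus the sum of the indirect effects (c = c' + Σab). A quick verification using the values reported above (the dictionary layout here is just for illustration):

```python
# Check that total effect = direct effect + sum of indirect effects,
# using the path coefficients reported in Table 3.
males = dict(c=0.238, c_prime=0.133, ab_lonely=0.084, ab_coping=0.021)
females = dict(c=0.223, c_prime=0.036, ab_lonely=0.176, ab_coping=0.011)

for label, d in [("males", males), ("females", females)]:
    total = d["c_prime"] + d["ab_lonely"] + d["ab_coping"]
    print(f"{label}: c' + sum(ab) = {total:.3f}, reported c = {d['c']:.3f}")
```

For males, 0.133 + 0.084 + 0.021 = 0.238, and for females, 0.036 + 0.176 + 0.011 = 0.223, matching the reported total effects exactly.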
---
Discussion
This study used a population-based sample of rural participants and validated scales to examine the experiences of child maltreatment, loneliness, coping skills and depression. More importantly, we aimed to test the association between child maltreatment and depression, to examine the roles of loneliness and coping skills in that relationship, and to compare whether the mediation model for males differed from that for females. In this study, although the participants were from rural regions, males were more likely to have been maltreated than females before 16 years of age. One possible reason may be that rural parents paid more attention to boys and pinned greater hopes on them, so boys were maltreated more than girls. The finding that nearly half of the females had received no education also supports this explanation. As mentioned in the introduction, we found an association between child maltreatment and depression in rural Chinese men and women.
This study supports the roles of loneliness and coping skills in the relationship between child maltreatment and depression, consistent with previous research [20,23]. The mediation models of loneliness and coping skills for men and women showed both similarities and differences. A previous study similarly indicated sex differences in the mediating effects of coping styles such as self-blame, fantasizing, problem avoidance, and rationalization on the relationship between childhood maltreatment and depressive symptoms [22]. The results showed that child maltreatment directly predicted increased loneliness and decreased coping skills, and that loneliness and low coping skills worsen depressive symptoms. Loneliness and coping skills partially mediated the relationship between child maltreatment and depression for males. Loneliness fully mediated this relationship for females, but coping skills did not. For males, child maltreatment directly influenced depression, loneliness, and coping skills, while also indirectly causing depression through loneliness and poor coping skills. For females, child maltreatment caused loneliness, thereby indirectly influencing depression.
Previous studies found that females were more prone to depression [10], and this study also found that the average depressive level of females was higher than that of males, although the difference was not statistically significant (F = 3.261, p > 0.05), possibly because the research methods, measures and participants differed from those of other studies. In this rural population, coping skills act as a protective factor, and higher coping skills may decrease the influence of child maltreatment on depression. Furthermore, loneliness is an important factor in people's mental health. The findings regarding gender differences revealed that loneliness played a more important role in the influence of child maltreatment on depression for females: depressive symptoms caused by child maltreatment were fully mediated by loneliness. Females were also more deeply influenced by child maltreatment than males.
It is noteworthy that women were shown to be more vulnerable to loneliness; society should provide more support and care for them, prevent and intervene in loneliness, and improve their coping skills. To summarize, we recommend that relevant departments promote education to improve personal qualities, decrease the incidence of child maltreatment, and provide more social support and assistance, consequently improving mental health.
This study has some limitations that must be considered. First, this was a cross-sectional study in which the independent, mediating, and dependent variables were measured simultaneously. Hence, the sequence of child maltreatment and depression is ambiguous: child maltreatment may have influenced depression, but it is also possible that depressed children were more likely to be maltreated. Second, participants may have forgotten maltreatment experiences because of their age, and some participants may have been ashamed to reflect on such experiences; the observed effects were therefore probably lower than the true values, although the outcomes were statistically significant. Third, we did not explore the different types of child maltreatment or the correlation between loneliness and coping skills, although lonely people may be less likely to develop the skills to cope with difficulties.
---
Conclusions
In conclusion, males were more likely to experience child maltreatment than females. We also found an association between maltreatment and depression, and the mediation models differed by gender: loneliness and coping skills mediated the relationship for males, while only loneliness mediated it for females. The effect of loneliness among females who experienced maltreatment was greater than that among males.
---
Availability of data and materials
The data were used under license for the current study and so are not publicly available. They are, however, available from the authors upon reasonable request and with the permission of the IRB of Shandong University School of Public Health.
---
Authors' contributions
MQW analyzed the data and wrote the draft, XMX participated in the data collection, and LS designed the study and commented on the draft of this manuscript. All authors read and approved the final manuscript.
---
Funding
The National Natural Science Foundation of China (71974114) funded this project but had no role in study design, data collection, data analyses, data interpretation, or the writing of the paper.
---
Declarations

Ethics approval and consent to participate
All research protocols were approved by the Institutional Review Board of Shandong University School of Public Health. All methods were carried out in accordance with relevant regulations and guidelines. Informed consent was obtained from all participants in the study; for illiterate and semi-illiterate participants, informed consent was obtained from their legal guardians.
---
Consent for publication
Not applicable.
---
Competing interests
LS is a member of the Editorial Board of BMC Psychiatry; the authors declare that they have no other competing interests.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
---
Abstract
One of the few silver linings in the COVID pandemic has been a new appreciation for, interest in, and engagement with nature. As countries open, and travel becomes accessible again, there is an opportunity to reimagine sustainable nature-based tourism from a therapeutic landscape lens. Framed within the therapeutic landscape concept, this paper provides an autoethnographic account of a visitor's experience of three different natural landscapes in Iceland shortly after the country's fourth wave of the pandemic. It adds to the understanding of the healing effects of the multi-colored natural landscapes of Iceland. The natural landscapes of interest herein include: the southern part of the Westfjörd peninsula, Jökulsárlón glacial lagoon, and the Central Highlands. In totality, the natural, built and symbolic environments worked in synchronicity to produce three thematic results: restoration, awe and concern, all of which provided reduced stress, renewed attention, as well as enhanced physical and psycho-social benefits for the autoethnographic visiting researcher. Implications of these restorative outcomes for sustainable nature-based tourism in a post-COVID era are discussed. This paper highlights how health and tourism geographers can work collaboratively to recognize, protect, and sustain the therapeutic elements of natural landscapes, recognized as a cultural ecosystem service. In so doing, such collaborations can positively influence sustainable nature-based tourism development and consumption through proper and appropriate planning and development of such tourism destinations.
---
Introduction
As challenging as COVID-19 has been and, in many ways, continues to be for the world's population, it has taught us a few things as well. For instance, we have learned that work can take place anywhere, the benefits of technology and telemedicine are indisputable, self-care is not self-indulgent, and nature matters. With the challenges and hardships associated with stay-at-home orders, social isolation, and the stress of the pandemic itself, many have found a new appreciation for nature and are consequently more motivated to preserve it. By most measures, the planet did well during COVID, with limited air travel and consumption, limited movement of people, and limited interruption to wildlife habitat, for example (Gunton et al., 2021; AARP, 2021). The renewed interest in natural areas is likely to create a new boom in nature-based tourism worldwide.
COVID has brought a time of pause for tourism, providing the time and attention needed to improve the sustainable planning, development, and management of nature-based tourism in a post-COVID era.
In addition to the impacts caused by the pandemic, several other forces are at play in the quest to protect nature; these include respite from an increasingly urbanizing and technologically driven world, and worry about the loss of nature due to environmental and climate change. Nature provides an opportunity to slow down, offering a time-out from the demands of technology and city life. Buckley (2022) notes that research evidence indicates that both nature and adventure tourism contribute to positive mental health, given that enjoyment promotes wellbeing. Climate change is another force at work. In addition to those motivated by 'last chance tourism' (e.g. Abrahams & Hoogendoorn, 2021; Salim & Ravanel, 2020), tourists are increasingly seeking opportunities to breathe fresh air in a world facing a growing number of wildfires every year (National Interagency Fire Centre, 2021). Seeking natural environments through nature-based tourism has been gradually rising in recent decades (e.g. Ólafsdóttir et al., 2021; Saeþórsdóttir, 2010) and is likely to rise more rapidly in the wake of the pandemic. This is especially so in Iceland, a country which has experienced an explosion in tourism since the turn of the century (Icelandic Tourism Board, 2021). One of the primary reasons for the pre-COVID boom in Iceland's tourism sector is its natural environment, and especially its pristine and wild character, which has long formed the backbone of the country's tourism industry. This is reflected in the many marketing slogans for Icelandic tourism, including: 'all natural', 'unspoiled wilderness', 'pure nature', and 'Europe's last wilderness' (Saeþórsdóttir & Karlsdóttir, 2009; Jóhannesson et al., 2010; Ólafsdóttir et al., 2016).
As a concept, wilderness is, however, heavily disputed, particularly in relation to the uses and management of wilderness areas in the Anthropocene (Saarinen, 2016, 2019). Still, no universally accepted definition of the concept exists. Wild and untamed nature represents, nonetheless, an environment that is becoming increasingly rare in our industrialized world and, consequently, a precious environment which a growing number of tourists are seeking. The value of areas where it is possible to get in close contact with nature is increasing, and especially so in wilderness settings where it is possible to enjoy solitude and tranquillity. In this paper we will generally refer to natural areas given that our sites of concern have varying degrees of wild and untamed nature.
Framed within the therapeutic landscape concept, this paper provides an autoethnographic account of a visitor's experience of three different natural landscapes in Iceland shortly after the country's fourth wave of the pandemic. This paper highlights how health (first author) and tourism (second author) geographers can work collaboratively to recognize, protect, and sustain the therapeutic elements of natural landscapes and, by so doing, inform sustainable nature-based tourism through illuminating the health benefits of natural landscapes. This is particularly important within the context of the growing nature-based tourism sector in the fragile Arctic and Sub-Arctic regions experiencing elevated climate change impacts. Iceland, located just south of the Arctic Circle, is a sparsely populated country with only 376,000 inhabitants (Statistics Iceland, 2022) sharing a land area of 103,000 km². Throughout Icelandic history, inhabitants have mainly been located along the coastline, leaving the interior highlands an uninhabited wilderness. Over the course of the past few decades, tourism has grown rapidly in Iceland; for example, there were approximately 4,000 international visitors in 1950, compared to nearly 2.4 million in 2018, sevenfold the country's population in that year. The number of international visitors dropped somewhat in 2019, and then collapsed in 2020 due to COVID (Fig. 1). In 2021 the number rose to nearly 700 thousand (Iceland Tourist Board, 2022). The escalating growth in tourism since the turn of the century triggered overtourism in some of Iceland's most popular destinations (Ólafsdóttir et al., 2021). There are many indications that tourism will grow rapidly again as travel restrictions due to the COVID epidemic are eased. Likewise, natural destinations like Iceland will be highly sought after following a long period of restraint.
As countries open, and travel becomes accessible again, there is an opportunity to reimagine sustainable nature-based tourism from a therapeutic landscape lens. Furthermore, using an autoethnographic account of a visitor's experience of three diverse natural landscapes in Iceland, the therapeutic benefits of these natural environments are better understood. The three natural sites selected to fulfil the aim of this study include: the southern part of the Westfjords peninsula, the Central Highlands, and the Jökulsárlón glacial lagoon (Fig. 2). Both authors travelled together to the Westfjords. The autoethnographic visiting researcher travelled to the Jökulsárlón glacial lagoon on her own and was accompanied by an Icelandic super jeep driver in the Central Highlands. The first author is the visiting health geographic autoethnographic researcher with knowledge about therapeutic landscape theory. As a tourism geographer and a native of Iceland, the second author provided important country context while situating the findings within the larger nature-based sustainable tourism literature.
A short review of the literature pertaining to therapeutic landscapes provides the background for the paper. Next, the autoethnographic approach is described. Results are presented via three themes: restoration, awe, and concern. Implications for nature-based tourism in a post-COVID era are discussed. The methodology and results are written in the first person, given that they were experienced by the first author as a visiting researcher to Iceland.
---
Therapeutic landscapes and nature affiliation
Recognized as a significant contribution that health geographers have made to the broad study of health, therapeutic landscapes theory is a conceptual framework for the analysis of physical (natural and built), social and symbolic environments, as they contribute to physical and mental health and well-being in places (Bell et al., 2018; Gesler, 1992). Traditionally, applications have been realized in the following four areas: traditional landscapes (i.e., those reputed for health and/or healing, such as shrines, pilgrimage sites, etc.), natural/pristine/wild areas, landscapes for the marginalized (i.e., the mentally ill, autistic, etc.), and applications to health care sites (i.e., hospitals, long-term care facilities, etc.) (Williams, 1999, 2007; Bell et al., 2018). A 2018 scoping review of the therapeutic landscape literature noted that, as a health geographical concept, therapeutic landscapes: offer in-depth insight into experiential, embodied and emotional geographies; promote awareness of place as both therapeutic and exclusionary; and continue to be a relevant and lively field of inquiry across health geography (Bell et al., 2018).
Certainly, natural environments have long been shown to be therapeutic landscapes (Williams, 2007).
Natural pristine and wild places of geographical splendor may also be understood as an important cultural ecosystem service. Cultural ecosystem services provide a range of varied benefits to humans, such as recreational, tourism, aesthetic, and spiritual benefits, all which enhance human and physical wellbeing (Millennium Ecosystem Assessment, 2005). In totality, cultural ecosystem services intersect with therapeutic landscapes in the many benefits places have, all of which contribute to enhanced mental and physical wellbeing.
Within the ongoing evolution of therapeutic landscapes theory, recent focus has been the examination of the coloured elements of landscapes, or the palettes of place (Bell et al., 2018); the colours blue and green have been particularly highlighted as promoting health and well-being (Foley et al., 2019). Within the context of Iceland, the significance of white and black landscapes was discussed by Brooke and Williams (2021), who recognized that many other natural landscapes of various colours have yet to be fully explored with respect to their therapeutic value, such as blue, green, yellow, brown, and grey. Azevedo (2020) recognizes wilderness as being exemplified by a range of colours, including white, blue, green and gold. We introduce a range of colours here within the context of health-enabling places.
---
Methodology
To explore the beneficial impact of Icelandic landscapes in relation to tourism, an autoethnographic approach was used. Geographers Butz and Besio (2009) argue that autoethnography enables authors to become part of what they are studying, where the research subject becomes re-imagined via 'reflexive narrators of self'. Autoethnography as method is becoming increasingly applied in human geography (Butz, 2010; DeLyser, 2015). Scarles and Sanderson (2016, p. 261) note in their work on tourism studies that '…in autoethnography, subjectivity becomes constructive rather than destructive; accessing 'hidden' spaces, stimulating creativity and deepening connection … therefore, the researcher is becoming researched and this process can ultimately enable a far richer research engagement and insight.' A number of health geographers have used autoethnography to capture the experience of place within the context of therapeutic landscapes. For example, Thompson (2021) employs autoethnography to reflect on personal encounters with digital health and, in so doing, illustrates how digital health disrupts existing, and creates new, therapeutic landscapes. Liggins et al. (2013) use autoethnography to reconsider the inpatient unit as a place of healing and, in so doing, attend not only to the material world but to the world within. Both these papers encourage the further use of autoethnography in therapeutic landscape inquiry.
The autoethnographic approach (Adams et al., 2022; Chang, 2008) used in this study not only allowed reflection on the experience of immersion in the three selected study sites, but also provided the opportunity to observe others experiencing the sites of concern. The study sites were selected to reflect different geographical landscapes in terms of nature and accessibility. Thus, the southern Westfjords reflect a rural cultural landscape located outside the popular ring road, the Jökulsárlón glacier lagoon reflects a popular tourist destination located by the ring road on the southeast coast, and the Icelandic Central Highlands reflect a wilderness area with limited access, especially during the winter months. Visiting in the fall, specifically in the month of October, these three sites were targeted in the following order: the southern Westfjords (3 days), the Jökulsárlón glacial lagoon (3 days), and the Central Highlands (3 days). All three sites were accessed by car or super jeep.
In keeping with autoethnographic method, the 'I' pronoun will be used from here on to describe the first author's experience of the Icelandic landscape as a visitor. Data collection was similar across each site and included observation and documentation of the landscape and the associated activities and feelings experienced, which was facilitated through copious field notes, poetry writing, picture taking, and videorecording. This was complemented by observing and documenting the various activities that others at each of the sites concerned were engaged in. Therapeutic mobilities were limited to driving, walking, hiking, and boating. At times during this project I did feel like an interloper, as I was not only a visitor but also a researcher. Autoethnographic data captured the lived experience of these places through observing, hiking and being fully immersed in these sites. I used the power of observation in each of these sites, being cognisant of all senses, and the thoughts and feelings that were experienced.
The primary field data were my field notes and poems, which were entered into my electronic field book multiple times each day; the other data types supported these field notes. Analysis of data followed thematic analysis, which systematically organized and identified data into meaningful themes (Braun & Clarke, 2012). Thematic analysis procedures included creating preliminary codes, which were assigned to the data in order to describe the content. Themes were organized across the data from these preliminary codes. The themes were then reviewed, defined and named (Braun & Clarke, 2012). Following qualitative guidelines for assuring reliability and validity, the research findings were rendered transferable, dependable, credible and confirmable. In totality, the natural, built, social and symbolic environments worked in synchronicity to produce three themes. The thematic results are presented using descriptive realist narrative and imaginative-creative poetry (Chang, 2008). Descriptive realist narrative portrays places and experiences as accurately as possible, while imaginative-creative writing allows the use of creative energy to express the autoethnographic experience in a range of genres, such as fiction, poetry and drama (Chang, 2008). When combined, descriptive realist narrative and imaginative-creative writing create autobiographical poetry that depicts places and experiences in prose. Given the importance of situating the ethnographic researcher, the positionality and reflexivity of the first author is now briefly described.
I was introduced to the therapeutic landscape concept via Wil Gesler's writings (1992) when studying as a doctoral student in social and health geography.
Although not my central area of study, I quickly became captivated by the concept and engaged with the health geography community in developing it further. I published my first edited collection on therapeutic landscapes in 1999 (Williams, 1999); this laid the groundwork for a second collection published in 2007 (Williams, 2007). I was particularly interested in increasing engagement with the lesser-known spiritual aspects of therapeutic landscapes, engaging in pilgrimage sites both in real time (Williams, 2010) and virtually (Williams, 2013). I had the privilege of supervising numerous graduate students on the topic, ranging from green spaces for university students' mental health (Windhorst & Williams, 2015a, 2015b), through to housing for families with autistic children (Nagrib & Williams, 2017, 2018) and for immigrant carer-employees working from home (Akbari & Williams, 2022).
A large part of my engagement with the therapeutic landscape concept was due to my interest in health geography, quality of life and well-being, as well as my own well-developed nature affiliation. My affiliation with nature began as a child, where I spent copious amounts of time outside, engaging in a wide range of natural spaces, usually in an active way, whether gardening, cycling, swimming, cross-country skiing, or playing a range of games, such as badminton, softball, golf, and hide-and-seek. My parents had the foresight of purchasing a cottage near Canada's longest fresh-water beach when they started their family. Here, my siblings and I spent every summer and most weekends, as the majority of our out-of-school time was spent at the cottage, and much of it outdoors.
As one of my former graduate students revealed in his work with university students' nature affiliation, positive experiences growing up in natural places have long-term mental health benefits, given that they nurture connectedness with nature throughout the life trajectory (Windhorst & Williams, 2016). In this work, it was found that nature connectedness had a positive and significant correlation with students' self-recalled positive childhood nature experiences, such as proximity to expansive, accessible natural places, and shared family engagement with and valuing of these places.
My love of nature was nurtured further in high school, where I received summer employment as a junior conservationist. Spending time hiking in nature was a favourite pastime, and I jumped at the opportunity to go hiking elsewhere, whether outside my own community in the city or at my family's cottage. In addition to hiking, long-distance cycling throughout North America was another therapeutic mobility that provided nature affiliation while in university. Having my own family provided the opportunity to raise my kids with many of these same activities; they now have high nature affiliation. As I move into my middle-age years, I am fortunate to have the opportunity to engage in work to conserve and protect natural places for future generations. This provides purpose and meaning while continuing to nurture my love of nature.
Having had the opportunity to visit Iceland in 2018, driving the famous 'ring road' around the island in search of coloured therapeutic landscapes, I was familiar with the white landscapes of the glaciers and ice, geothermal steam, and thousands of grazing sheep, all of which contrasted with the black sand and basalt evident throughout the island (Brooke & Williams, 2021). The inconceivably peculiar but wild mountainscapes of Iceland quite consistently reflect this colour palette of white and dark colours. Going back to Iceland in 2021, I had the opportunity to spend an extended amount of time visiting what I had perceived in 2018 as three of the comparatively more wild and pristine areas of Iceland. Each of the selected sites will now be briefly described in turn.
Iceland's Westfjords are often described as untapped nature, given that the famous ring road that circles the island does not include the many fjords making up the three larger fingers of the Westfjords. Many would say that the lack of ring-road access has protected this part of Iceland, and particularly the northern tip of the Westfjords, which is primarily made up of the Hornstrandir Nature Reserve. This northern finger also contains Drangajökull, one of the many glaciers in Iceland. My visit was focused on the southern finger, and specifically the area around the largest populated village in this area, Patreksfjörður. An early snowstorm postponed the trip twice but, once there, provided a white snowscape to experience. I accompanied two Icelandic researchers on this 3-day excursion, both of whom were interested in land-use conflict between industry, tourism and other purposes. I too engaged in this work while visiting, as it provided a deep understanding of the villages we visited. We drove a rental vehicle with four-wheel drive in order to manage the snowy conditions of the roads. One of these researchers is a tourism geography researcher and co-author of this paper.
The second site, the Jökulsárlón glacial lagoon, became one of Iceland's most popular tourist sites in the years before the pandemic. Many travel books on Iceland describe the Jökulsárlón glacial lagoon as bewitching, given the sense of awe it engenders amongst visitors. The glacial lagoon is located south of Vatnajökull, which is Europe's largest ice cap; the lagoon's current size is approximately 18 square kilometers. It is connected to the Atlantic Ocean by a short waterway less than a kilometer in length. The lagoon has a very short history, both geologically and historically. It began to form around 1930 as a result of the retreat of the glaciers south of the Vatnajökull ice cap, following the end of the Little Ice Age; the lagoon has been expanding rapidly since (Björnsson et al., 2001). Along with the growth in tourism in Iceland, the popularity of the site as a tourist destination has steadily increased. At this timeless site, the glacial water and glacial chunks flow into the Atlantic Ocean, leaving pieces of ice on the black coastal Atlantic beach. This beach, affectionately called Diamond Beach, is almost as popular as the lagoon. The lagoon shapes Breiðamerkursandur, the glaciofluvial sand plain south and west of it. The glacial lagoon has characteristically cold blue glacial waters dotted with icebergs from the surrounding Breiðamerkurjökull glacier, an outlet glacier from the Vatnajökull ice cap. Seals visit the fish-filled lagoon. The lagoon is often described as one of the many natural wonders of Iceland not to be missed. I spent 3 days at this site on my own. While hiking along the lagoon up to the boundaries of the glacier, I spent hours observing the site from many different angles and altitudes. I took a Zodiac boat tour to get a fulsome understanding and experience of the expansive, deep lagoon, as well as the glacier tongue.
The third site visited, the Icelandic Central Highlands, was the most anticipated given that it is perceived to be the comparatively most wild. The Central Highlands are primarily located in the center of the country and make up a vast area of wilderness. As the country's population is primarily found in settlements near the coast, with nearly 70% living in the Capital area (Statistics Iceland, 2022), the Highlands are primarily visited for outdoor recreation such as hiking, hunting, jeep touring, and cross-country ski touring. Part of the Highlands is accessible to all kinds of cars in the summertime. In the wintertime, the area is less accessible, as there are no road maintenance services. Consequently, the Highlands are only accessible by super jeep in winter, given that these are equipped with four-wheel drive and have balloon tires that can be deflated for better traction in the snow. The grazing season, especially in the Highlands, has been greatly shortened in recent decades in order to protect the fragile vegetation; grazing is only allowed during the summer months. Iceland's Central Highlands is often described as the largest span of wilderness in Europe. In years past, many in Iceland have tried to establish it as a national park and continue to advocate for its legislation. I was accompanied by an Icelandic super jeep driver who was very knowledgeable about the Highlands, given that he was a former member of Iceland's Search and Rescue Team and, consequently, able to manage the worst weather and storm conditions.
---
Thematic results
In keeping with the therapeutic landscape concept, the natural characteristics of the three sites studied were vibrantly clear, being augmented by built elements. The therapeutic social and symbolic characteristics of these environments were less apparent at first, only showing themselves as time was invested in experiencing the sites. Given the centrality of mountains as a key component of the landscape across all three sites, I was reminded of and often felt their symbolic meaning throughout my visit. As with other mountainous regions across the world, Icelandic mountains not only symbolize strength, greatness and permanence, but also proximity to a heavenly existence and good health. Although perceived as claustrophobic by some, the general perception is that mountainous environments are healthful. A proponent of high-altitude medicine, Auer (1982) wrote that the stimulating powers of sun and snow at high altitudes exert an influence on physical health, noting that a stay at high altitude has been a long-standing medical prescription that remains valid today, with doctors recommending stays at high altitudes as good preventative medicine. He also summarized the spiritual rewards of an alpine environment: 'He who knows how to open his mind and his heart in the mountains and to the mountains will be richly rewarded.' (Auer, 1982, p. 18). There is no question that the mountains made me feel a sense of awe in their size and beauty, especially when painted with the various colours of white, grey, black, and green. Climbing mountains often symbolizes overcoming obstacles and making progress, as it did for me, described below as restoring my attention, reducing my stress and enhancing a more positive outlook.
The primary colours in the visited sites were white, black and green, although yellows, greys and browns were also evident in certain locales, providing bursts of colour. In totality, the natural, built, and symbolic environments worked in synchronicity to produce three thematic results: restoration, awe and concern. Overall, the thematic results, written below using descriptive realist narrative and imaginative-creative poetry (Chang, 2008), provided reduced stress, renewed attention, and enhanced physical and psychological benefits for the researcher, as discussed in the first and dominant theme, Restoration. Awe fed into feeling restored, while feeling more restored allowed me to experience greater awe. Feeling both restoration and awe engendered a sense of concern for the future of these natural landscapes.
---
Restoration
Having been the glue that held my four-person nuclear family together safely and in good health throughout the first 1.5 years of the pandemic, I was ready for an adventure. The monotony of virtual teaching and working under lockdown, coupled with the ongoing tasks of domestic management and kids learning online due to stay-at-home orders, took their toll on my wellbeing. Further, as a caregiver to my closest uncle, aged 88 years, who was living in long-term care and had experienced numerous outbreaks and subsequent lockdowns, I felt a great deal of worry and concern. As a university professor and parent, I was experiencing many of the symptoms of burnout characteristic of human service professionals: poor physical health, cynicism and negativity, all of which were reflected in a lack of energy and motivation, a change in sleep quality and appetite, and a loss of satisfaction from activities that previously were enjoyable (Kahill, 1988). In addition to needing a change and break from the everyday mundane activities of the pandemic, I was starving for wide open natural spaces.
In addition to feeling burnt out, I experienced a serious knee injury 5 weeks before travelling and had only 3 weeks of physiotherapy to heal and strengthen it. Luckily, I was able to purchase a knee brace for the trip. Throughout the trip I continued my regular daily exercises to ensure my knee was strong enough to endure the arduous hiking I was planning for. I did manage to complete all the hikes and walking tours I set out to do, but it was painful at times! The limited hiking I did in the Westfjords and Central Highlands was primarily done with others accompanying me on these trips. These trips primarily included 1-2 h hikes, often on challenging terrain; having my hiking poles close by at all times assisted both my balance and confidence! As a consequence of having company on these hikes, there was a strong social component. Conversation often focused on the geological history, culture, or the people that characterized the place in which we were hiking or, in the case of the Westfjords, the research we were involved in there, specific to the forces of globalization causing land-use conflict between industry and tourism. As a result of this project, which included collecting data via stakeholder interviews and focus groups, I was fortunate to speak to a number of Icelanders, both native and newly settled; consequently, I was able to more fully understand the forces at play that made the place work. I felt fortunate to be part of the research project as it allowed me to gather an 'insider's' understanding of the community and brought to life the physical geography of the fjords and the surrounding geography. The Central Highlands experience also contained a strong social component, in that I was accompanied by a native Icelander who had a great depth of knowledge specific to the geography, culture and history of Iceland. He was generous in sharing his knowledge, which was often shared in the form of stories.
He also knew the Highlands intimately, sharing the special places rarely written about in tourist guides. We did quite a few hikes together, and he provided opportunities for two shorter solo hikes. Similar to the hiking in the Westfjords, conversation focused on the geography, culture and history of the places we were visiting. The solo hikes felt comfortable and safe given the safety net of knowing he was waiting at the end point. Although the social element in both the Westfjords and Central Highlands provided a more intimate understanding of both places, and even though I am still in touch with the folks who I travelled with, I still felt very much like a visitor, an outsider looking in.
The hiking I did around the Jökulsárlón glacial lagoon spanned both east (2 h) and west (4 h) of the main artery leading to the Atlantic Ocean. The eastern hike was shorter, following the lagoon's beach. The western hike was more variable, crossing flood plains and hills. The western hike reminded me of the week-long Tour du Mont Blanc that I did in the European Alps as a university student; the scenic views of the mountains were ever-changing. As I was alone at this site, I felt a strong sense of adventure, but also vulnerable given the vast and expansive landscape; into the second day of my stay I was yearning for social connection. What follows is a poem I wrote, describing the vulnerability of being alone and injured on these hikes:
---
Trails of the Jökulsárlón Lagoon

So many trails to choose from,
Will my body hold up?
Climbing and climbing,
Knees and hips jarring with every step.
Can I do it? Am I fit enough?
Where is my balance?
Use the hiking poles.
Are my hiking boots sturdy enough?
Pace yourself
This is not a race.
Both hikes gave me a sense of the size of the lagoon, as well as the ecological processes at work. On both hikes I experienced the loud sound of glacier pieces calving, breaking into smaller pieces and hitting the water. On each occasion, a loud crack was heard, followed by the sound of the waves produced by the smaller pieces falling into the water. These calving processes are unplanned, as the glacier pieces calve at all hours and times of the day; I felt lucky to have experienced this, being present to and observant of the natural ecological processes at work. Otherwise, the soundscape is dependent on where you are located in the lagoon; there is either silence accompanied by birds singing, or the gurgling of water under and around the glacier pieces. The cold arctic wind blowing down from the highlands was constant, being extremely strong at times and literally blowing me forward or backward on the trail, depending on which way I was travelling. I stopped to rest about every 20 min or so, finding a rock to sit on while observing the landscape. Here I listened, watched, and felt. Each time I was struck with the beauty and serenity of the experience. Arctic terns flew above me, while water streamed out of rocks, flowing into the lagoon. The glacial ice floating in the lagoon melted before my eyes, providing a sense of timelessness, like I was in a backwards time warp. The experience of watching the ice melt as it floated toward the Atlantic Ocean felt like a sped-up movie reel; century-old ice melting in a matter of minutes. Therapeutic mobilities were limited to walking/hiking, and boating in the lagoon.
Both the Westfjords and Central Highlands, with their majestic wild scenery, extent, and wide-open spaces, provided not only relief from stress but renewed attention, reflected in the goal setting I did in both these environments. Given the rugged terrain in all three places, hiking provided enhanced physical benefits while nature provided psychological benefits.
---
Awe
Similar to the work of Pearce et al. (2017), who explored tourists' views of a natural part of Tasmania, Australia, a range of awe-inspiring experiences were felt when visiting the three chosen sites, including: (1) vast geological landscapes (i.e., mountains and glaciers), (2) aesthetics (i.e., sculpted mountains, fjords, and glaciers of various blues/whites and transparencies) and fauna (i.e., sheep, seals and birds), (3) ecological phenomena (i.e., movement of the tree line to higher altitudes, erosion of mountains, calving of glaciers, movement and flow of glaciers, tall waterfalls), and (4) reflective/perspective moments (i.e., mortality, timelessness). Each of these will be discussed in turn.
---
Vast geological landscapes
Iceland is made up of vast geological landscapes, which are characteristic of all three sites of concern herein. The southern Westfjords have majestic mountains surrounding every fjord. The drive to and from the southern finger of the Westfjords was exceptionally scenic, as the road follows the coast of each uniquely beautiful fjord. Although the size of the mountainscapes made me feel minuscule, the beauty of each fjord made me feel that I was in a perfectly wild place, with the greens, blues, whites and blacks working so beautifully in combination. I felt a great deal of gratitude for having experienced such beauty.
The Central Highlands are made up of what appears to be a never-ending range of white, glacier-topped mountains with great extent and much folklore. In snow-covered Hveravellir, where we stopped for the first night, I received a tour of the many hot springs and spent some time in the geothermal pool. Here, the awesome story of the legendary Fjalla-Eyvindur, which translates as "Eyvindur of the Mountains" (Iceland Magazine, 2015), was shared. Eyvindur is the most well-known Icelandic outlaw in history, being the source of numerous myths and stories. Eyvindur fled into the highlands in the mid-1700s after being accused of stealing. He lived there for 20 years, evading the state, which was consistently on the hunt for his whereabouts. The Westfjords also have folklore about him, as Fjalla-Eyvindur lived there for a few years. These stories are taught in Icelandic elementary schools, as the tale of the outlaw is a story of resilience, determination, and strength of body, spirit, and mind. How he could live for so long in such a vast, sparse and wild environment was bewildering to imagine.
The Jökulsárlón glacial lagoon is known as the jewel of Iceland for the tourism industry, having a backdrop of the massive white Vatnajökull glacier and the white and grey mountain ranges making up the Vatnajökull National Park. As with the mountains in the Central Highlands, the Westfjords, and in all parts of the world, there is a feeling of majesty and magnificence when looking at them. The mountain ranges and glaciers contribute to the aesthetics of these natural landscapes, as do the waterscapes, skies, and landform shapes; all reflect the nature-based colours of blue, white, green, brown, red, grey, and black. I felt beauty surrounding me in each of these sites, often having trouble deciding in which direction to look.
---
Aesthetics & fauna
The large spaces, extent, and uninterrupted spectacular natural views made me feel remarkably small and insignificant. The many fjords that make up the southern Westfjords were an icy blue, surrounded by steep mountains of grey, red and black, all of which were spotted with cascading rivers and green meadows. Sea birds were abundant in the fjords, making the area an international birding destination. The many hues of blue and white within the Jökulsárlón glacial lagoon provided great contrast to the grey and black seals sunning themselves on the glacial ice. The large white glaciers atop the grey and black mountains in the Central Highlands were stunning given the vast brown and black rock deserts that stretched out on either side of the single-lane, two-direction dirt road. Such multicoloured natural wilderness gave me a strong feeling of being alive, reflected in a sense of excitement, adventure and hope for my life's future.
---
Ecological phenomena
A wide range of ecological phenomena, most related to climate change and glacial melt, take place in each of the sites of concern, but the visual effect is most rapid in the Jökulsárlón glacial lagoon. As referred to earlier, the lagoon experiences the calving of glaciers, as well as the movement and melting of the resulting icebergs out to the Atlantic Ocean. Various wildlife habitats are also evident, including a range of birdlife, fish, and harbor seals, the latter of which congregate near the mouth of the lagoon to catch fish. One of the most fascinating processes is the melting of 1000-year-old glacial ice into the Atlantic Ocean within a 6-month window. The lagoon is perpetually growing, being formed naturally from melted glacial water; big blocks of ice calve off the ever-shrinking glacier. The rapidity of this ecological process is not only awe inspiring, but astonishing. The lagoon grows approximately 300 m each year, as the glacier's tongue recedes, with a greater degree of calving and a greater volume of meltwater.
---
Reflective/perspective moments
The reflective moments in each of the natural sites were many. The Westfjords provided the first geothermal bath of the trip; located high in the mountains, it was a sight to watch the sky turn from blue to orange, then pink, as the sun set over the fjord. Geothermal baths, recognized by others as socially and culturally responsive therapeutic landscapes (McIntosh et al., 2021), were a common occurrence in the Central Highlands as well, given the abundance of geothermal activity all over the island. Time in a natural geothermal bath provided time to reflect on the day, while providing restoration to sore muscles and a full but tired mind. The timelessness of the Jökulsárlón glacial lagoon was where I simultaneously felt my mortality and the brevity of life, together with gratitude for the beauty of the place. The expansiveness of the Central Highlands, with its many glacier-topped mountains and miles of moraine desert of great extent, provided perspective on the importance of sustainable development. As beautiful as the land was, it was the sky that seemed to come alive given its ever-changing nature, whether in the many hues of blue, white, orange and pink. The following poem was written about the sky in the Central Highlands:
---
Highland Sky

Constant change, like a symphony of sound
Rain or snow falls
Clouds move in and over mountain tops
Mist falls in waves, wetting my face
The sun peeks out to join the orchestra
Playing hide and seek with the other elements
Before succumbing to mastering the sky
A rainbow presents itself as the climax of the concerto
Together with a backdrop of blue, grey and white
Were these the same skies of Eyvindur of the Mountains?
---
Concern
Concern was felt in all three sites, due to the effects of overtourism (Dodds & Butler, 2019), climate change impacts, and the Anthropocene. This was evident and symbolized by the litter at the Jökulsárlón glacial lagoon, land conflicts with tourism in the Westfjords, and a growing network of roads in the Icelandic Central Highlands. The expanding road network in the Highlands unlocks these wild landscapes for infrastructure development and resource exploitation (Saeþórsdóttir & Ólafsdóttir, 2017; Tverijonaite et al., 2018), primarily in the areas of tourism and energy harnessing (Tverijonaite et al., 2019). All three sites, and particularly the glacial lagoon and Central Highlands, showed clear signs of being impacted by climate change. As with elsewhere in the world, the Icelandic glaciers are melting at an alarming rate, having clear impacts on nature-based tourism, especially in the southeast of the country (Welling et al., 2020). The Breiðamerkurjökull glacier started to retreat due to rising temperatures, and between 1930 and 1940 the first signs of what is now known as the Jökulsárlón glacier lagoon became evident. The lagoon has been growing in size ever since, due to a warming climate. Rising temperatures continue to shape the lagoon, which is currently the deepest lake in Iceland, having grown to four times its size since the 1970s (Björnsson et al., 2001; Guðmundsson et al., 2017). In addition to receding glaciers, the warming climate is evident in the slow creep of the tree line, evident in the new growth of native birch trees, albeit stunted.
Tourism ballooned in Iceland at the turn of the century, following the country's economic collapse. The central government initiated a highly successful marketing campaign to boost tourism in Iceland. Consequently, many of the most popular hiking trails are over-trodden. This is, for example, the case for the long-distance hiking trail between Landmannalaugar and Þórsmörk in the southern part of the Central Highlands of Iceland. To illustrate the rapidity of the growth of the tourism sector, the second author recalls setting this trail with direction markers back in the 1980s; by 2021, the trail was recognized as one of the most popular in Europe. Over-trodden trails are not the only concern, as congestion of people, vehicles, and buses at the Jökulsárlón glacial lagoon and Diamond Beach has initiated the building of two additional paved parking lots on the Diamond Beach side of the Ring Road.
Tourism brings people, and people bring pollution. Both at the Jökulsárlón glacial lagoon and in the Central Highlands, I found myself picking up garbage, specifically plastic bottles, bags and rope of various kinds. Tourism has brought garbage, and plastic specifically, to what was formerly known as pristine wilderness. Many tourists appeared to be visiting the Jökulsárlón glacial lagoon to check the site off their list of places to see while visiting Iceland. This idea of a tourist site checklist was brought up in discussion with community stakeholders in the Westfjords as well. I felt concern and worry over the changes that tourism has brought to these natural environments, in the form of building, road construction, sign pollution, and all other forms of pollution: garbage, sound, and plastic.
The developing road network in the Central Highlands is a sign of potential infrastructure development. Visiting one of the oldest and most remote hiking huts in the Central Highlands (constructed in the 1940s) demonstrated the very basic accommodations once available: bunk beds and an outhouse. Further up the main dirt road is Hveravellir, a reconstructed hiking hut located beside a natural hot spring area. The newer hotel beside it is equipped with intranet, flush toilets, hot water showers, and sophisticated kitchen facilities, all of which have become available year-round since a power and intranet line was dug across miles and miles of the Central Highlands a few years ago. Road access allows such infrastructure to be built, providing the foundation for further development. Although there are many protected areas within the Central Highlands, the whole area is not currently protected. The increased interest in experiencing this last bastion of wilderness in Europe continues to grow. This is evident in the large buses, equipped with all-wheel drive and balloon tires, taking tourists into the Highlands for day trips through to winter, when accessibility becomes more limited. I felt concern over the need to protect the whole area, given the growing popularity of the Central Highlands as a nature-based tourist destination. Driving along the snowy road toward Hveravellir we rescued two tourists whose vehicle had gotten stuck in the snow; they had been stranded for some time. Although they had rented a four-wheel drive vehicle, they wrongly assumed they were able to traverse the Highlands. Once out of the snow, it was suggested they head down to the coast and not veer from the Ring Road. According to my driver/guide, such an incident is commonplace in the Highlands, happening far too often.
Sheep farming is one of the largest agricultural sectors in Iceland, with thousands of sheep roaming freely during the summer months in most areas outside the capital region of Reykjavík. Given that rural areas make up 90% of the island, there is ample opportunity for sheep to roam freely. Sheep are very much a part of the natural landscape, whether in farmyards, fenced-off fields, or grazing near the roadside or up in the Highlands. Gorman (2017) discusses the need to incorporate animals in the study and understanding of therapeutic landscapes, suggesting that they be understood as co-constituents and co-participants of therapeutic spaces. Using examples of how animals are agents in the therapeutic encounter for children with learning challenges, such as dyslexia and ADHD, Gorman highlights the need to 'bring the animals back in' (p. 329). Certainly, the sheep give life to the landscape, not only with their movement but with the pleasant 'singing' sounds they make. Hiking past the ewes and lambs was a real treat, allowing close proximity and observation of the beautiful animals. Although there are black sheep amongst the many white sheep, most Icelandic sheep are white. Farmers keep the ewes and lambs together on the farm early in the spring. Once the lambs are established and firmly bonded with the ewes, the sheep are let out to roam the pasture freely all summer long, colouring the landscape with varying intensities of white, depending on their number. They are collected in the fall using dogs, horses, and various mechanized vehicles. Sheep were still roaming free when I visited the Westfjords in October, when I wrote the following poem:

Although all three sites had unique therapeutic characteristics which contributed to healing burnout, the Westfjords were comparatively the most healing, followed by the Central Highlands and the lagoon.
This order was primarily due to the extent of the social component in the two most healing sites, which complemented the natural and symbolic components.
The results of this study speak to one of the most immediate discussion points: the need to act to encourage sustainable tourism development, especially in the Sub-Arctic and Arctic, where changes due to climate warming are happening at a faster rate than elsewhere. Concern over how natural environments are being impacted by climate change motivates action, whether via micro- or macro-scale changes. This certainly occurred for me in my everyday life post-travel, as I consequently swapped my clothes drier for a drying rack, purchased an electric car, and accepted an invitation to join the Board of Directors of a community food garden that I volunteer for in the summer. This transformative impact offers promise that the same will be experienced by the millions of travelers seeking nature-based tourism, further building awareness of our environmental impact and agency around sustainability, and sustainable living more broadly.
---
Discussion
Using an autoethnographic approach to the experience of Icelandic natural areas through the lens of therapeutic landscapes provides four discussion points. First, as with earlier work on wilderness as therapeutic landscape (Williams, 1999; Windhorst et al., 2015a, 2015b, 2016; Bell et al., 2018), the first thematic finding provides further evidence that nature provides cognitive restoration. The autoethnographic researcher's experience of the Icelandic natural landscape via the three study sites was restorative. This feeling of being restored was evident when returning back home feeling refreshed, re-energized, and ready to engage in both academic work and family/home management/care work once again. Given that therapeutic landscapes theory understands place as a key element in engendering health and wellbeing, through physical (natural and built), social, and symbolic environments (Gesler, 1992; Bell et al., 2018), the three sites of concern herein highlight the healing power of the natural environment. Employing autoethnography, this paper has emphasized the experiential, embodied and emotional geographies from the researcher's perspective via three thematic findings. Awe feeds into feeling restored, while feeling more restored allowed one to experience greater awe. Feeling both restoration and awe engendered a sense of concern for the future of these natural landscapes.
Related to this is the variable of coloured landscapes, or 'palettes of place' (Bell et al., 2018). The healing blues and greens were found across all three sites, complemented by the awesome yellows, browns, and greys. Given the presence of water, recognized by environmental psychologists (Kaplan, 1995; Kaplan & Kaplan, 1989) and health geographers (Foley et al., 2019) as the most healing natural component of landscape, the various colours of water in Iceland, whether blue, grey, bubbling white, or mineral-rich green, were apparent across the three sites. Blue was the dominant colour of the ocean surrounding the many fjords in the northwest, and of the coastline surrounding the lagoon in the south. Building on earlier work (Brooke & Williams, 2021), white was also dominant, given the vast amount of moving water found in the many rivers and falls of the Westfjords, Central Highlands, and lagoon site. Mineral-rich green water was evident in the Central Highlands and the Westfjords, specifically in areas rich in geothermal baths and geysers. In addition to white water, white landscapes dominated each of the sites given the presence of thousands of sheep, and of ice and snow via the numerous, substantial glaciers. The colour white often contrasts with other colours, such as the black sand on Diamond Beach adjacent to the glacier lagoon, the green fields and pastures of the Westfjords, and the grey rock and brown soil of the Central Highlands.
A third discussion point addresses the possible exclusionary attributes of these sites (Bell et al., 2018). With respect to the exclusionary nature of the identified sites of concern, two points are most evident: cost and accessibility. The costs of travel in Iceland, as in most of Europe, are expected to exclude many potential visitors; those who visit and travel in Iceland are generally well-off, given the high costs of services compared to many other, more equatorial tourist destinations. Surprisingly, access to the three sites of concern herein, as with most natural places in Iceland, is free, with no entrance fee in place. Free access seems, however, to be slowly changing, as a parking fee is now in place in a few popular tourist destinations, such as Vatnajökull national park and Þingvellir national park. Increasingly, a number of other sites, especially private ones, are requiring an admission fee and/or payment for using the restrooms (Øian et al., 2018). Increased tourism in Iceland has furthermore changed accessibility demands, both with respect to the volume of access to popular tourist destinations and with respect to increased accessibility for those in wheelchairs and/or who use mobility aids. One development noted by the autoethnographic researcher since the earlier 2018 trip to the glacial lagoon was the addition of four parking spots allocated to those with disabilities. These spots were strategically placed to allow access to the waterway trail between the Atlantic Ocean and the glacial lagoon. This improved accessibility brings nature closer to more people, allowing greater inclusivity and therefore a greater number of visitors to reap the positive health benefits that nature provides.
However, many researchers have shown that increasing accessibility to a greater number of tourists leads to landscape change, transforming wilderness into developed and often populated areas and, subsequently, also changing visitors' experience of it (Bishop et al., 2022; Haraldsson & Ólafsdóttir, 2018; Ólafsdóttir & Haraldsson, 2019; Tverijonaite et al., 2018). It is therefore a great challenge to manage nature-based tourism in wilderness settings like the Icelandic Central Highlands.
---
Implications for nature-based tourism in a post-COVID era
This study has many implications for nature-based tourism, given that therapeutic landscape theory intersects with cultural ecosystem services. There are many benefits that natural places provide, one of which is the contribution they make to enhanced mental and physical wellbeing (Williams, 1999; Windhorst et al., 2015a, 2015b, 2016; Bell et al., 2018); this benefit provides considerable opportunities for nature-based tourism in a post-COVID era. After two years of isolation and stay-at-home orders at various levels, COVID has opened our eyes to the therapeutic value of nature. As confirmed by the results of this study, research evidence notes that spending time outdoors in nature has been a critical factor enabling people to cope with stress during and following the pandemic (e.g., Mental Health Foundation, 2021). Further, with the growing number and intensity of natural disasters brought on by climate change across the world, crises such as the COVID pandemic have alerted us to how quickly our world can change. The experience of the pandemic amplified concern for climate change while increasing public support for a green recovery, suggesting bolder climate policies and greater interest in sustainability (Mohommad & Pugacheva, 2022).
As outlined by the United Nations Sustainability Development Goals (2021), sustainability is a key dimension in our planet's wellbeing which, in turn, impacts society's wellbeing. Sustainable development has been a key theme in Iceland's government tourism strategy for the past 30 years and, in so doing, acknowledges its importance for Icelandic tourism. In 2019, the Icelandic government set out an ambitious vision for Icelandic tourism through to 2030, where the goal was to become a leader in sustainable development worldwide (Government of Iceland, 2019). Although sustainable development is the goal, the economic dimension of sustainability has consistently been a leading force in Icelandic tourism, somewhat reflecting the laissez-faire approach largely guiding the tourism sector. Ólafsdóttir (2021) points out that with increased knowledge of sustainable development, there is greater acceptance that the three pillars of sustainability, defined as economy, society and nature, are part of a closed system; nature sets limits for societal growth, and society sets limits for economic growth. It is therefore necessary to understand the behavior of the system in order to know where the boundaries between these pillars are and, consequently, manage development so that it remains within a sustainable system. A holistic vision of all the influencing factors, and an understanding of how they interrelate, is therefore fundamental to the development of sustainable tourism.
As noted earlier, COVID brought a time of pause for tourism, thus providing an opportunity to critically reconsider tourism challenges such as overtourism and the impacts of climate change (Gössling et al., 2021). Moreover, this pause gave precious time to improve the planning and management of tourism in meeting the goal of sustainable development for therapeutic nature-based wellbeing. For example, this time of respite allowed Iceland to renovate sites experiencing overtourism, such as the Jökulsárlón glacial lagoon. In addition to extending the number of parking spots, signed accessible parking spots have been allocated for visitors with disabilities, making use of the space adjacent to the Jökulsárlón glacial lagoon on the Diamond Beach site.
In addition to achieving ecological sustainability, natural tourist destinations should involve interpretation, education, and enjoyment of nature, while bringing benefits to both visitors and local communities. Related to this is the issue of safety. In each of the three sites of concern, many of the features are viewed at the tourist's own risk, with limited signage, direction, or rules. Trained as a National Lifeguard in Canada, the autoethnographic researcher often caught themselves worrying about the lack of safeguards, such as fences and other barriers, at the sites of concern. The limited safety protection translates into somewhat of a hazard for some visitors, such as those with young children, those unsteady on their feet, or overly zealous photographers. Exploring whether this was a concern for others would be a useful follow-up study. Hence, when developing nature-based tourism, it is critical to develop site-specific zoning for the different market groups, using focal points as a management tool (Ólafsdóttir et al., 2018). In this way, it is possible to protect the most pristine wilderness (Saeþórsdóttir & Ólafsdóttir, 2017).
Iceland's nature-based tourism sites exist on a continuum, from minimally developed to fully commercialized. In the most remote and wild places, such as the Central Highlands, visitors must often rely on their own GPS navigation systems to find the many natural treasures. In the most popular Highland areas, a volunteer Search and Rescue Team is stationed during the summer high season and, so far, has been very quick to respond to calls for help. Further, mobile phone connection is now available across a large part of the Highlands. Whether this means that the therapeutic value of the place improves (given a potentially enhanced feeling of safety) or declines (given that potential sources of stress are much more proximal) is important to explore in a follow-up study. Although limited in number, Iceland has commercialized some of its natural sites. Near one of the entrances to the Central Highlands, two geological attractions are found: Gullfoss waterfall and the Geysir geothermal area. At both sites, there is a large food court, a gift shop, and multiple hotels. In fact, unlike many of the natural sites throughout the country, these sites have built paths meant to steer the crowds. Although these commercialized sites are accessible to those with mobility aids, the autoethnographic researcher much preferred the sites that were not commercialized, as their natural quality and, consequently, their therapeutic value remained much more intact. Exploring the therapeutic benefits of limited development versus commercialization, as seen with mass tourism, provides yet another topic for further study in this realm.
---
Conclusion
Framed within the therapeutic landscape concept, this autoethnographic study of three of Iceland's natural sites adds to the understanding of the healing effects of the multi-coloured natural landscapes of Iceland in a COVID era. In sum, the natural sites of the southern Westfjords, the Jökulsárlón glacial lagoon, and the Central Highlands worked in synchronicity to produce three thematic results. The three themes of restoration, awe, and concern provided renewed attention, reduced stress, and enhanced physical and psychological benefits for the autoethnographic researcher. Further, this paper has highlighted how health and tourism geographers are well positioned to work collaboratively to sustain the therapeutic elements of natural landscapes, recognized as a cultural ecosystem service. In so doing, they can influence sustainable development and consumption through proper and appropriate planning and development of such tourism destinations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
---
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
Social networking technologies are influential among men who have sex with men (MSM) and may be an important strategy for HIV prevention. We conducted focus groups with HIV-positive and HIV-negative participants. Almost all participants used social networking sites to meet new friends and sexual partners. The main obstacle to effective HIV prevention campaigns on social networking platforms was stigmatization based on homosexuality as well as HIV status. Persistent stigma associated with HIV status and disclosure was cited as a top reason for avoiding HIV-related conversations while meeting new partners using social technologies. Further, social networking sites have different social etiquettes and rules that may increase HIV risk by discouraging HIV status disclosure. Overall, successful interventions for MSM using social networking technologies must consider aspects of privacy, stigma, and social norms in order to achieve HIV reduction among MSM. Although men who have sex with men (MSM) make up only about 2% of the U.S. population (Purcell et al., 2012), in 2010 MSM accounted for 78% of new HIV infections among males and 63% of new infections in all populations combined. HIV is reemerging as a serious epidemic among MSM (Beyrer et al., 2012); HIV incidence is declining in most countries and segments of the population except among MSM (Centers for Disease Control and Prevention [CDC], 2012). Gay and bisexual men in the U.S. are 44-86 times more likely to be diagnosed with HIV compared to heterosexual men. More than half of HIV-positive young MSM are unaware of their HIV status (CDC, 2010). Further, only one quarter of the 1.1 million Americans living with HIV have appropriate access to care and a suppressed viral load (CDC, 2012). These data highlight the need to develop effective strategies to improve testing and healthcare utilization among MSM. One possible strategy for interventions among MSM is the use of social networking technologies.
---
SOCIAL NETWORKING TECHNOLOGIES
Social networking technologies are tools that allow users to create connections, communicate, and share interests online (Gunawardena et al., 2009). Social networking technologies encompass all technological tools used for communication within networks, including websites, mobile applications, video, and other media. The rapid expansion of smartphones has resulted in increased use of social networking technologies (Chernis & Wurmser, 2012). The MSM population has disproportionately embraced smartphone ownership over the past few years, with 91% of gay males versus 63% of heterosexual males currently owning a smartphone device (Community Marketing Insights, 2009, 2013). Likewise, membership in social networking sites has grown more rapidly in lesbian, gay, bisexual, and transgender (LGBT) populations in recent years than in the general public (Community Marketing Insights, 2009, 2013). In 2013, 67% of gay men reported having visited an MSM-themed website/blog, an increase of 34% in one year (Community Marketing Insights, 2013). This increased use provides an opportunity for social and behavioral researchers to disseminate health messages and implement interventions to promote behavior change using these new technologies.
Almost since its inception, the internet has been a tool used by MSM to find sexual partners (Shaw, 1997), a practice that has continued to grow in popularity and complexity over the years. A 2012 survey found that 46% of gay men used the internet to meet new sexual partners (Grov & Crow, 2012), mostly through the use of mobile applications (apps). The most prominent mobile application described in the literature is Grindr. In 2009, this mobile app geared toward MSM introduced the use of geolocation features to communicate with nearby individuals and facilitate finding romantic or sexual partners. Since then, similar mobile apps targeting sub-segments of the MSM population (e.g., Scruff, Mister, Recon, Adam4Adam Mobile, ManHunt Mobile, Dudes Nude) have also gained popularity among MSM worldwide (Landovitz et al., 2013; Rendina, Jimenez, Grov, Ventuneac, & Parsons, 2014).
---
SOCIAL NETWORKING PREVENTION TECHNOLOGIES FOR MSM
Earlier studies show that a high percentage of MSM use the internet and social networking technologies (SNTs) to seek health information (Magee, Bigelow, DeHaan, & Mustanski, 2012; Wilkerson, Smolenski, Horvath, Danilenko, & Rosser, 2010), making SNTs ideal for disseminating information to this high-risk group. SNTs also provide promising routes for behavioral interventions because they represent the primary mode of socializing and sexualizing for many young MSM (Harris Interactive, 2007; Horvath, Rosser, & Remafedi, 2008). Further, interventions using social technologies are advantageous for MSM because they bypass the need for face-to-face interventions, providing privacy, confidentiality, convenience, and reach that can increase the willingness of young MSM to participate in prevention and care services (Magee et al., 2012). Social technologies for HIV prevention represent an emerging phenomenon in critical need of rigorous study while these platforms remain so popular and useful for health interventions among MSM. To date, several interventions have targeted MSM using new social technologies. A systematic review found that technology-based interventions (primarily text-message based) for people living with HIV show promise for encouraging medication adherence, sexual risk reduction, decreased drug use, increased health literacy, and improvements in depressive symptoms (Noar & Willoughby, 2012).
Earlier studies have shown that HIV-positive individuals seldom discuss their HIV status with potential partners online prior to engaging in high-risk sexual encounters (Chiu & Young, 2015; Serovich, 2014). This lack of communication about sexual risk is mostly attributed to HIV stigma. Risk reduction interventions delivered via social media technologies have demonstrated low to moderate success in most populations (with effect sizes ranging from low to moderate in general and MSM populations; Gold et al., 2011; Noar & Willoughby, 2012). One randomized controlled trial found that interventions using social networking sites such as Facebook are acceptable and can be effective both in changing behaviors and in increasing testing rates among participants (Young, Cumberland, et al., 2013). However, a different randomized trial using Facebook showed that behavioral change was present in the short term but returned to baseline in the long run, similar to the short-term effects often found in nontechnology interventions (Bull, Levine, Black, Schmiege, & Santelli, 2012). Other studies examined what online interventions should look like in order to make them more appealing to MSM; they identified high interest in sexual health topics and sexually explicit content (Hooper, Rosser, Horvath, Oakes, & Danilenko, 2008). We need a better understanding of how MSM use social networking technologies in order to customize interventions for maximum effect within prevention programs. The objectives of this study are to: (1) explore how MSM and their social networks interact using social networking technologies; (2) uncover the perceived barriers to prevention programs using social networking technologies; and (3) explore ways in which a behavioral HIV intervention can be successfully implemented among MSM using social networking technologies.
---
MATERIALS AND METHODS
A convenience sample was recruited from a small urban area of New England (U.S.) from September 2013 to March 2014 by posting flyers at a health clinic serving a predominantly MSM patient population and at local LGBT bars and establishments, and by posting a Facebook event on an MSM dance club's Facebook page. Interested participants contacted staff members for further information about the study and enrollment. Eligibility criteria included: (1) self-identifying as gay, bisexual, or a man who has sex with men; (2) being 18 years of age or older; and (3) being English-speaking. Further, we stratified our recruitment approach by HIV status such that half our sample was HIV-positive and half was HIV-negative or of unknown status. HIV-positive status was confirmed because all HIV-positive participants were recruited from a participating HIV clinic. The focus groups were co-led by two members of the research team, one who led the group facilitation and another who took notes and asked probes when applicable.
---
INTERVIEW GUIDE
The semistructured interview guide contained four main themes: (1) participants' social networks, (2) technology practices, (3) HIV knowledge and communication, and (4) prevention as it relates to all three previous categories. The guide consisted of 24 questions (including probes and follow-up questions), with 8 pertaining to social networks, 7 pertaining to technology use, and 9 pertaining to HIV knowledge and communication. Most questions were open-ended, meant to encourage open and honest participation by focus group members, and allowed for probe and follow-up questions. All sessions were audio recorded using a digital recorder. The focus groups lasted approximately 90 minutes each. Participants received a $30 incentive at the end of the group. Procedures for the focus groups were approved by the university Institutional Review Board.
---
PARTICIPANTS
For this study, we conducted five focus groups. For one focus group, all but one participant cancelled their scheduled appointment, turning that session into an individual interview; this individual was HIV-negative. The other four focus groups, two for HIV-positive MSM and two for HIV-negative MSM, each included 8 to 10 participants, for a total of 34 participants. Participants were organized by HIV status in order to provide insight into how HIV status influenced testing and engagement in care. All participants were informed during recruitment of the type of group they would join (i.e., HIV-positive vs. HIV-negative only), and we developed group rules that stressed privacy and confidentiality to build trust and rapport, given the potential sensitivity of disclosing one's status.
---
DATA ANALYSIS
All audio recordings were transcribed by a member of the research team. We used Grounded Theory and the constant comparative method (Stacks & Salwen, 2014; Strauss & Corbin, 1990) as our main analytic frameworks. The entire research team read each focus group transcript and the individual interview to identify main themes and to create a preliminary coding tree. The coding scheme was developed using an inductive approach (Thomas, 2006). We modified the coding tree to add any new relevant themes and codes before transcript coding began. Two research team members independently coded the transcripts using the finalized coding tree and QSR International's NVivo 10 qualitative data analysis software.
We established an a priori coder agreement threshold of 90% and had multiple coders code one transcript to establish coder calibration and agreement. Coding discrepancies were discussed and reconciled between coders. If coding agreement fell below 90%, we would retrain and recalibrate coders in an iterative process until 90% or higher agreement was reached. We achieved greater than 90% agreement on all themes (ranging from 94% to 100%) from the initial coding and therefore did not need to retrain or recalibrate coders. All focus group and individual interview data were coded and analyzed using matrix coding queries to determine the most salient themes. Finally, we compared themes between the HIV-negative and HIV-positive groups using matrix coding queries to assess whether themes differed between our two main groups.
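The intercoder-agreement check described above can be illustrated with a minimal sketch. This is not the authors' actual procedure or data; the function name, code labels, and excerpt counts are hypothetical, and it assumes simple percent agreement (identical codes on the same excerpts) rather than a chance-corrected statistic such as Cohen's kappa.

```python
# Hypothetical sketch of a percent-agreement check between two coders.
# Assumes each coder produced one code label per transcript excerpt,
# in the same order; all labels and data below are illustrative only.

def percent_agreement(coder_a, coder_b):
    """Share of excerpts to which both coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Coders must code the same set of excerpts")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Illustrative codes assigned by two coders to ten excerpts.
coder_1 = ["stigma", "privacy", "etiquette", "stigma", "privacy",
           "disclosure", "stigma", "etiquette", "privacy", "stigma"]
coder_2 = ["stigma", "privacy", "etiquette", "stigma", "disclosure",
           "disclosure", "stigma", "etiquette", "privacy", "stigma"]

agreement = percent_agreement(coder_1, coder_2)
print(f"Agreement: {agreement:.0%}")  # prints "Agreement: 90%"
```

In a workflow like the one described, a result below the a priori 90% threshold would trigger discussion, reconciliation, and recoding until the threshold was met.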
---
RESULTS
The demographic characteristics of the participants are displayed in Table 1. Of the 34 participants, almost all were Caucasian. HIV-positive participants ranged in age from 28 to 55 years old, whereas HIV-negative participants ranged in age from 18 to 41 years old. Overall, 44% of our participants were HIV-positive and 56% were HIV-negative.
Several themes emerged in response to our social networking technology questions including: meeting new people, demographic differences in use, social media etiquette, disclosure, privacy concerns, and the use of technology in prevention.
---
MEETING NEW PEOPLE
Social networking technologies were the main way by which participants met new people and kept in touch with current members of their social networks. In terms of meeting new people, even though the use of certain sites was meant for sexual encounters, many participants revealed how interactions that were initially sexual in nature turned into friendships. "I made a couple of friends through those hook up sites, people I only met with the intention of hooking up with but turned into a friendship" (HIV-positive participant). Another participant from the same group also acknowledged this unintended consequence: I've used it for both purposes: friends and looking for sex … and the line gets very blurred. Sometimes I just want to chat … and get into these deep conversations and they will go on for a couple of days and then they'll tell me … do you want to f**k? So I kind of just stay away from all that stuff. (HIV-positive participant)
Even though many of the participants showed some contempt towards these platforms, the potential of finding long-lasting friendships through them was often acknowledged.
I have a sort of cyclical or undulating relationship with these gadgets. But the thing is I met some really nice people in the likes of Manhunt and Facebook and some of them turned into great friendships so I think it's important not to discount them. (HIV-negative participant)
The use of social networking technology for friendships and sex was especially useful for some participants when traveling and in areas with inactive gay scenes and infrastructure: "when we moved here it was a big deal because we didn't know people and it was like where do you go, because there's really no bars here …" (HIV-positive participant).
---
DEMOGRAPHIC DIFFERENCES IN USE
Another theme that emerged was differences in social networking technology use based on age, location, and HIV status. Younger participants described their use of new social networking technologies and the trends they see regarding older and more established ones like Facebook: This participant implied that among the individuals who used this app to engage in bareback sex (i.e., anal sex without the use of condoms), the nondisclosure button was an invitation to assume an HIV positive status. Further, this quote emphasized that some social networking sites are used primarily for social purposes (e.g., Facebook, Instagram), whereas others are used primarily for sexual purposes (e.g., BBRT, Dudes Nude). This distinction between social and sexual purposes often led participants to discuss issues related to social media etiquette.
---
SOCIAL MEDIA ETIQUETTE
The idea of media etiquette is a concept that was raised in several groups. This concept describes the ways in which people should behave when using social networking platforms and also how an initial interaction can lead to a transition from one social networking platform to another, which eventually leads to a face-to-face encounter. The way in which people first introduce themselves in social networking technologies was explored in one of the HIV-negative groups:
Everyone does that "oh I'm just looking for friends" and I'm like really? [be]cause you're a headless torso or you're shirtless, showing your body and you're just like "yeah looking for friends" but you won't respond to everyone, you'll just respond to attractive men … so you aren't just looking for friends you're looking for attractive men that you are eventually going to f**k so stop saying you're looking for friends. (HIV-negative participant)
A sexual undertone is commonly assumed when meeting someone in a gay forum or app, even if the person explicitly states he is not looking for a sexual encounter. This creates a problem for participants looking for long-term partnerships or friendships:

I remember there would be a lot of people who would reply to me when I said "whoa, what are you looking for" and I would say friends or chatting. They would get pissed, and they would say "you know this is for hooking up?" and "what are you doing out here?" (HIV-negative participant)
Participants described the process of transitioning from one social technology platform to another in one of the HIV-negative groups: "There's a hierarchy. Like you'll start off on Grindr, and then text, and then you get to know them, and then here's my Facebook…I'd like to learn more about you" (HIV-negative participant). Participants also discussed that what people talked about with each other, and how they communicated, was influenced by the type of social media platform, and that certain types of communication were more appropriate for some platforms than for others.
Participant M: That's probably like my biggest pet peeve. When someone comes at me really directly and sexually on any kind of social media. It's like hey here's a picture of my d**k and I'm like cool. I could have found a much better one online … Participant N: Even in text messaging it's like that. It's like awful. Oh my G-d get over yourself.
Participant O: That's why I stopped using Grindr: people start conversations with pictures of their penises.
Participant P: Or guys that have a picture of their abs as their profile picture. (HIV-negative participants)
Most groups agreed that apps and websites which granted a greater degree of anonymity (e.g., Grindr) allowed for people to be more sexual and direct when contacting others. Participants identified a similar direct approach to conversations regarding risk taking behaviors. "I think that conversation escalates a lot quicker as far as … 'Hi how are you; What are you looking for? What are you into?' Then it goes right into condoms or no condoms" (HIV-positive participant).
This also applies to uses of prevention, as one participant summarizes: "… it's a place for people to meet other people; it's not a place for you to be an activist about HIV or gay rights …" (HIV-positive participant). Etiquette and appropriateness of such messages were cited as barriers for possible prevention interventions using social technologies.
---
DISCLOSURE
The topic of HIV status and disclosure when meeting new potential sex partners was a prevalent theme in both the HIV-positive and negative groups. Some participants expressed frustration when sites asked them for their status in their profile: "that bothers me. In those sites where you have to put your status, I put negative because I don't want everybody to know because it's not their business at that point" (HIV-positive participant).
The HIV-positive participants expressed conflict in whether to be forthright by disclosing their status on social networking hook up sites, and the potential consequences of men being uninterested in them: "I mean when this is what you get … and it was hard, people were uninterested, if you disclosed people would not hook up with you" (HIV-positive participant).
HIV-negative participants expected the status question to be part of the social networking technology platform they were using and tended to readily believe what they read: "I have never asked. If you're disease free, it will be on their little blurb [in the phone app] …" (HIV-negative participant). Still, for HIV-negative participants the conversation was avoided mainly because it was considered something that would turn off a potential sexual partner:

I wouldn't ask because it's just awkward. I feel like it would ruin the conversation. If you are trying to just meet someone with the goal of hooking up with them I feel like asking them, being up front with him would be a turn off and they would [look for] other options on Grindr. (HIV-negative participant)
In addition to disclosure of status, some social networking sites encourage the disclosure of one's viral load.
People in those websites now have the option of saying, instead of positive, undetectable…and that bothers me because I know a lot of people that say they are undetectable and … they could be … but I've also spent 3 to 4 days with them and the only drugs that I've seen them put into their bodies are done recreationally … (HIV-positive participant)

Several participants stated that this new type of status gives a false sense of security to HIV-positive individuals, who in turn transmit this information to potential sexual partners. "I know someone who was going around saying that because he was undetectable, he couldn't pass the virus around … so he wasn't disclosing his status with anyone" (HIV-positive participant). This may represent a strategy to manage potential stigma situations caused by their HIV-positive status.
---
PRIVACY CONCERNS
Privacy of social media postings was a frequently cited concern for participants: "say you are head of a company and you're hiring and I've been with people that go to Facebook to check the person out. And when you post something, whether you are (HIV) positive or not, it's … you know … I wonder about this person" (HIV-positive participant). This concern for privacy was related to people within their social networks, best expressed in one of the HIV-positive groups:
Earlier in my life … I was very open and honest, I was OK with having that conversation about my status, but I'm in a relationship with someone who tries to be very anonymous and doesn't want his business out there. So I keep my s**t on lockdown … I try not to have my Facebook and Twitter … I mean I won't even do a four square [app] check-in when I'm at the gay and lesbian center because I don't need people knowing why I'm there or asking why. (HIV-positive participant)
The concern for privacy was mentioned by both HIV-negative and HIV-positive participants, but was more prevalent in the HIV-positive groups. HIV-positive participants expressed concerns that posting messages about HIV prevention within social network sites would lead to negative consequences for themselves and their loved ones because of stigma toward HIV and how quickly information can spread within a social network using these platforms.
Participants in both groups often described their social networks as separate or compartmentalized. There was some overlap between their networks, but generally they were kept separate, often along the lines of sexual orientation. For some participants, this compartmentalization was due to a perceived lack of empathy, specifically from heterosexual members of different social networks, such as work and/or school. For HIV-positive participants, this compartmentalization also reflected a fear of discrimination and stigma, which influenced what they posted on social technologies.

I used to have a blog, right after I first got sober and it was just about like trying to date and dealing with my status … now my partner works in the medical field and … there are restrictions about going into [medicine] if you're already positive … so that creates a problem. And he doesn't want his family all over the place knowing that I have a blog. (HIV-positive participant)

This hypervigilance in social networking technologies regarding sexual orientation and HIV messages is a limiting factor affecting the dissemination of prevention messages within important MSM networks.
---
USE OF TECHNOLOGY IN PREVENTION
The link between technology use and prevention strategies was carefully explored during the group discussions, and several possible prevention strategies using technology emerged. First, participants stressed that current prevention messaging through popular gay websites does not have the far-reaching effect it intends. One HIV-negative participant commented on a mass email about HIV testing sent by one of these websites: "I'll be honest I've never read that email and I'm pretty sure that unless you've had a recent risky behavior you probably have not looked at those emails. They are easy to delete" (HIV-negative participant). Participants felt similarly about group messages sent via mobile apps, describing them as poorly orchestrated. Many participants stated that an individualized message within a given social network would be the most effective way to change behaviors within that network. The more personal the message the better:
Texting is more effective than e-mailing. Calling is more effective than texting. The only way that we really get through to people is by talking with them face to face. I think this is generally applicable and that by far the most effective way would be to talk to somebody. (HIV-negative participant)
Participants understood that personal messages had a more profound effect than any behavior campaign out there:
One of my friends just recently became HIV-positive, and he's a close friend of mine … and I'm like, oh crap, it can happen to anyone. So that's when [it] lit a fire under my ass. Be more safe go get tested. It has to hit you at home, because if it doesn't no one is going to care. These little 17-18 year olds that are going to a club … they're not going to care. (HIV-negative participant)

Several other participants related similar ideas, describing message communication within their social networks as a stronger influence in shaping their behaviors. However, many participants hesitated to reach out to others directly because of, as previously explored, privacy concerns and social network compartmentalization. For example, this is what a participant said about making his Facebook posts private:

For me personally, I'm too lazy to do that. I'm just going to be blunt. I'd rather go through my phone list and message those [rather] than create a group. That's my own personal preference. If someone put me in one of those groups and I felt comfortable with everybody in there I would, but when I have different people from various walks of life, family and what not, I wouldn't feel comfortable doing that piece of my personal life. (HIV-positive participant)
The overlap of social networks and possible breach of privacy through technology appeared daunting for some participants. However, participants suggested that masking prevention messages would allow participants to maintain privacy while still receiving important information. Participants suggested embedding prevention messages in something that would not only be more appealing but also more attractive to the intended audience:
If you were to do something that is masked … that somehow can [engage others in] the education of different topics, like transmission, and risk activities, and mask it like some kind of survey where like … "oh your friend took this survey and failed, why don't you see if you pass it" … and it can kind of go viral. (HIV-positive participant)
Using this subtle engagement strategy would open the doors for people to more widely share HIV-prevention content within their networks. Participants also mentioned the need for wide broadcasting of the message through multiple types of media. Saturation and variety would increase the likelihood of reaching the target audience and increasing the message's impact on behavior. Participants also felt messages should come from both external sources such as clinicians and prevention experts as well as from within social networks through peer delivered-messages.
---
DISCUSSION
Consistent with the literature, our results suggest that MSM are frequent users of social networking technologies, using them to maintain current relationships and to meet new people, including for friendships and sex (Grov & Crow, 2012). Use was particularly prevalent and important for young MSM. Young people's preference for newer technologies has been previously described in the literature (Bachmann, Kaufhold, Lewis, & Gil de Zúñiga, 2010; Delli Carpini, 2000) and aligns with our finding that younger MSM are early adopters and frequent users of technologies. Several important themes emerged which provided insight into how MSM use social technologies, as well as the best strategies for using social technologies in HIV prevention.
An important theme that came out of discussions with our participants was social compartmentalization, which has been previously observed in the literature (Rosenmann & Safir, 2007). Compartmentalization suggests that certain behaviors and conversations over social media depended on the target audience and the perceived privacy of the technology. Participants were reluctant to discuss HIV or gay-related themes on social media sites, particularly ones with large groups of heterosexuals (e.g., friends, family, co-workers), out of concern for privacy and fear of stigma-related social consequences for themselves and their romantic partners. This compartmentalization due to issues of privacy may limit the usefulness of mainstream social networking sites for HIV prevention, where MSM may not feel comfortable discussing their sexual orientation or topics that may make people think they have HIV.
Participants expressed discomfort discussing HIV prevention topics using the public functions (e.g., posting) of general social networking sites like Twitter and Facebook, and were more comfortable using technologies that allowed for more intimate and private conversations, such as small online groups of friends, mobile apps with greater privacy settings, or even more preferably one-on-one text or instant messaging. This is consistent with a recent study of young heterosexual men and women that showed that people did not feel comfortable posting HIV and STI prevention messages on relatively public social networking sites like Facebook, and preferred face-to-face communication of HIV/STI prevention messages or more private technologies like one-on-one texting/messaging (Divecha, Divney, Ickovics, & Kershaw, 2012). This suggests that a deep concern for privacy is a primary obstacle for any intervention employing social networking technologies. The issue of privacy with online interventions has been raised in several studies, each providing creative solutions or strong precautions to protect privacy while utilizing those technologies (Pachankis, Lelutiu-Weinberger, Golub, & Parsons, 2013; Pedrana et al., 2013; Young, Cumberland, et al., 2013). Protection of confidentiality was recognized as the single biggest obstacle against prevention efforts within social networking technologies. Without assurances about anonymity, most of our participants expressed great reluctance in transmitting messages related to HIV awareness and prevention. These results suggest the need to create private options within social networking sites or social networking technologies that allow people to engage in more intimate and sensitive conversations without fear of violations of their confidentiality or privacy.
Technologies like Snapchat, Slingshot, and Crumble Messenger, messaging apps that allow one to post messages and pictures with the premise that they will permanently disappear, may be a step in the right direction for providing MSM with more privacy in posting and receiving prevention messages without threat of breaking confidentiality.
In addition to the compartmentalization found in general social networking sites like Facebook and Twitter, we uncovered compartmentalization on gay websites and apps in terms of content and behavior. Many of the gay-themed apps like Grindr and Manhunt were perceived by many of their users as primarily sexual. Attempts to develop platonic friendships on these were often perceived as unwelcome or unsuccessful. Further, discussions of prevention on many of the gay-themed apps were often perceived as out of place. Participants were quick to note the poor reception of mass-produced messages found on many gay-themed sites. This extended even to disclosure issues. Disclosure of HIV status on social networking platforms is inhibited by fear and the stigma associated with the disease (Derlega, Winstead, Greene, Serovich, & Elwood, 2004), a theme which also emerged in the present interviews. Participants noted that there are only a few gay apps that allow for HIV status disclosure in online profiles. Mostly, participants avoided the disclosure conversation altogether, which is consistent with studies showing high rates of nondisclosure to new sexual partners by HIV-positive individuals (Parsons et al., 2005). The lack of disclosure etiquette on social networking sites may facilitate HIV transmission and risk. Strengthening social norms around disclosure on social networking sites, particularly ones used for meeting sex partners, is an important avenue for future prevention interventions.
A few limitations of our study should be noted. First, our sample was primarily White, and therefore the applicability of these findings to largely minority MSM populations still needs to be explored. Second, the use of focus groups to discuss sensitive topics such as sexual behavior and HIV may have made some people reluctant to speak up compared to individual interviews, particularly given our participants' concerns about privacy. However, focus groups can also facilitate conversations and add a richness that would not be obtained with individual interviews.
---
CONSIDERATIONS FOR FUTURE INTERVENTIONS
Our results suggest that broad HIV prevention approaches, such as mass texts or the creation of prevention Facebook pages or Twitter accounts, are unlikely to be successful, implying that the field needs to devise strategies that allow MSM to maintain network compartmentalization and address concerns for privacy while receiving HIV prevention interventions using social networking technologies. This is consistent with results from trials using social networking sites like Facebook, which have faced problems with retention and decreased efficacy longitudinally (Bull et al., 2012; Young, Szekeres, & Coates, 2013). Techniques that embed prevention messages in clever ways may be more effective, such as integrating HIV prevention and testing messages in social networking games (Hieftje, Edelman, Camenga, & Fiellin, 2013), quizzes (e.g., Which Game of Thrones character are you?), or larger themes of health and wellness of MSM. Further, our results suggest that participants are not opposed to engaging their peers and friends about HIV prevention, but prefer to choose the best social media platform for these discussions based on who they are communicating with and the nature of the message. This suggests that strategies such as diffusion of innovation, which involves intervening with key members of social networks who subsequently spread the messages to relevant members of their social networks, might be an appropriate way to spread HIV prevention messages throughout MSM populations (Rogers, 2010; Stacks & Salwen, 2014). Diffusion of innovation employs early adopters with wide access to a given social network to disseminate messages about HIV testing and prevention, and allows for spread using one-on-one conversations within social network technologies.
Focusing on key individuals within networks and providing them training in HIV prevention and how to tailor messaging for different social technologies may be an effective strategy that provides MSM with agency and choice when intervening with members of their own social network.
---
Ramallo et al.
Based on cultural-heritage plays involving throwing actions, we created "deBallution," a public interactive artwork in which audience members throw pseudo-balls. Audience members participated in the interactive artwork not only for pleasure but also as part of their cultural heritage, maintaining and also disrupting social orders and structures. First, this research extracted the audience's basic activities from cultural archetypes. It then applied these audience activities to a basic model of public interactive artwork, from playing on a media façade to participating in a collective performance that disrupts social structures. The interactive artwork's concept is to capture audience members' throwing movements toward a virtual screen and to draw various generated kaleidoscope images at the predicted impact points on the screen. We built the "deBallution" prototype, exhibited it, and conducted user tests. Based on the evaluation results for the prototype, we revised the contents of the "deBallution" artwork to develop its artistic values and produced the overall interactive artwork.
1 Introduction
---
Background and Motivation
The audience member has often been just a spectator in public art, not a participant or creator [1][2][3]. Digital technologies allow audience members to help build a city's scenery through their own actions [4,5]. It is possible to change the city's landscape using audience members' activities, mediated by digital technologies through interaction. This public experience puts audience members in cooperation and competition to change the city's landscape. Digital art is avant-garde because it makes use of digital media, which prompt interactive, participatory art, which in turn prompts a participatory democratic society [6].
The value of public artwork lies in making public participation an "unforgettable experience," not just a private experience of artwork. These experiences could lead to direct audience action mediated by digital technologies. From the viewpoint of audience participation, Claire Bishop proposed "participatory aesthetics," which differs from Bourriaud's "Relational Aesthetics" [7,8]. According to Bishop, "The artist's practice, and his behavior as producer, determines the relationship that will be struck up with his work. In other words, what he produces, first and foremost, is relations between people and the world, by way of aesthetic objects." The work of art has a social and historical context, but its role is not to engage directly with society; art is disengaged, and it has its own space. Bishop asserted that the Relational Aesthetics concept is the ideal form of audience collaboration and cooperation. Through public interactive artwork for a media façade driven by audience interaction, audience members actively participate in and change the contents, producing the landscape of the city.
The main motivation of this paper is to produce a digital interactive artwork building on a cultural archetype. Cultural heritage directly supports public interactive artworks in audience action and in the embodiment of contents. This audience action not only involves the body movements of passive observers and performers choosing a scene; it also generates energy for changing social views of politics. This gives aesthetic value to public artwork based on cultural heritage. Through archetypes, it was possible to extract original emotions and activities from a universal human model and to derive the artwork's contents: narrative, visualization, sonification, and the embodiment of the artwork's objective [9,10]. Audience members participated in the public artwork and saw their shadows change due to other audience members' actions on video. Cultural heritage has seen use in digital games, especially narrative ones, to shape a main character's actions, features, and graphic images. Cultural heritage can also help artwork enhance its aesthetic values through the audience's universal activity patterns, because audience members encounter situations with mythical or traditional experiences and return to the origin model of humanity through cultural heritage.
---
Related Work
Public digital artwork has been produced in various ways, offering participating audiences new artistic value through collaborative or competitive experiences. Rafael Lozano-Hemmer has made public interactive artwork installations based on digital technologies, treating "the city as interface"; he showed that an alternative interface design is possible which stimulates brief encounters as part of everyday urban life [11]. Emily et al. proposed "The VideoMob," an interactive video platform and artwork that enables strangers visiting different installation locations to interact across time and space through a computer interface that detects their presence, video-records their actions while automatically removing the video background through computer vision, and co-situates visitors as part of the same digital environment [12]. Beyer et al. proposed "The Puppeteer Display," a wide interactive banner display installed on a city sidewalk, and two long-term field studies investigated the opportunities of public displays to actively shape the audience [13]. However, these artworks and research projects have not used audience archetypes to make new artwork contents. Audience members simply had an experience of real life, with much the same patterns; they did not know the various meanings of their own actions and so duplicated their usual actions. This is because the artwork and research did not consider human psychology and cognition in view of the objectives and results of audience actions. These audience actions had only temporary effects and did not expand the public experience toward making a new society: new rules, new communities, and new roles for humans. Applying archetypes will create new, expanded experiences, generating audience action across various layers, beyond space and time, age and gender.
2 Artwork Overall Design
---
Artwork Concept
The basic artwork concept is to make a new artwork from audience members' whole-body actions. The audience's actions will influence social values, mediated by public media.
The meanings of the title "deBallution" are as follows.
First, it means a digital revolution by throwing balls: a symbolic revolution mediated by digital artwork, a change from tradition to digital technologies. Second, it means devolution by throwing balls. Devolution is the transfer of some authority or power from a central organization or government to smaller organizations or government departments. The audience here performs a symbolic devolution: audience members throw pseudo-balls at the media façade and cause a symbolic digital revolution, a devolution [14].
This paper chose the throwing action to evoke audience activities of resistance and destruction, of competition and antagonism, generating a new world rather than passive media or society. Why did we focus on the throwing action in this artwork? The throwing action is related to disruptive aesthetics; our motive was "overthrowing a society." The term "overthrowing" means "beyond throwing," the accomplishment of an objective through throwing activities [15]. Overhand throwing is a basic throwing action used in war, hunting, and sports: a direct, fast, and accurate throw made by moving the hand over the shoulder. This throw is a symbol of a strong motivation to hit a target and change its condition.
This artwork referenced three throwing games that are part of different cultural heritages.
(1) Greek "Hyakintos" Myth

The discus-throwing myth recounts the origin of the flower "Hyakintos," arising from a relationship between a god and an ordinary person involving friendship, love, and jealousy. The myth also tells the story of the origin of throwing sports, which in Greece meant the discus. Unlike other sports, discus throwing is not a war game; it is a pure competition for records [16] (Fig. 1).
① Main objective - Throwing a discus a long distance single-handedly.
② Activities - Throwing the discus with a three-quarters movement, according to the rules.
③ Values - A thrown discus comes back just like a boomerang, as does friendship, creating entertainment in a group.
---
(2) Stone War from Korean Traditional Play
A war of throwing stones is a traditional Korean game [17]. Two communities separated and began throwing stones at each other. This game came from real war but developed into a traditional folk game performed at festivals. The game enhanced group relationships through competition between the communities (Fig. 2).
① Main objective - Competing and winning by throwing stones for one's community.
② Activities - Throwing real stones and avoiding or defending against stones from opponents.
③ Values - Establishing cooperation within the community and competition with opposing communities. The ultimate value was to strengthen both communities for future real battles.
(3) Battle of the Oranges from Ivrea in Italy
The Battle of the Oranges is a festival in Ivrea, Italy. It involves some thousands of townspeople, divided into nine on-the-ground combat teams, who throw oranges, with considerable violence, at dozens of cart-based teams during the last three carnival days [18]. People wearing a red hat are not considered part of the revolutionaries and therefore do not have oranges thrown at them. These traditional games were based on participants' throwing. The participants enjoyed the game as if playing at war and felt the pleasure of rebellion and victory (Fig. 3).

① Main objective - Throwing real oranges at opponents and defeating the official guards, as in the traditional carnival.
② Activities - Throwing real oranges and avoiding flying oranges.
③ Values - Revolution for ordinary people in a carnival game, and visual pleasure from the crushed oranges.
These archetypes feature throwing actions by individuals or groups, and they go beyond festival play toward making a new world.
---
Scenario Design
We created "deBallution's" scenario design based on the previous artwork concept and applied narrative forms [19].
(1) Audience members watched video contents about the city's landscape on a media façade or a large display.
(2) Audience members threw pseudo-balls onto the media façade or the large display.
(3) The video contents on the media façade or the large display broke or generated pseudo-balls.
(4) Audience members disrupted the video content when they filled the media façade or the large display with broken or generated images.
(5) The new content that played on the media façade or the large display symbolized a new world.
---
Graphic Design
The main concept of the graphic design is to visualize the audience's throwing actions by generating new images at the throwing points. The contents visually express the participants' desire to overthrow reality by throwing. The first screen images are realistic and fanciless, reflecting everyday life. After participants throw, a festival starts on the screen, but the screen begins to be damaged. At any point where a participant throws, a kaleidoscope image expands, like the images of crushed oranges. Throwing actions have usually happened at festivals in the past; in this project, the throwing action means that participants gather and set off firecrackers, making a festival. The kaleidoscope images resemble firecrackers and are reminiscent of festivals, and the many points of action signify the many participants. Ernest Edmonds used kaleidoscope images in foundational interactive-art research to generate various pattern images from audience action [20,21]. The participants' repeated throwing actions create recurring but diverse patterns of firecrackers, leading participants into a rhythmic fireworks festival. The various firecracker images depend on the motion of the participants, which is intended to emphasize the diversity of individuals. Soon after the festival ends, strange, error-like images appear on the screen. These little errors lead to big changes and damage. Finally, the participants overthrow the screen image. A glitch effect is used for the damage, as it mirrors the principle of a glitch: unintended simple errors by participants generate the new screen. This gives a positive meaning to errors, failure, and the participants' ultimate conquest of the screen (Fig. 4).
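The kaleidoscope imagery described above can be approximated with a standard angle-folding construction: every pixel's polar angle around the throw point is folded into a single sector, with alternate sectors mirrored, so one procedurally generated wedge repeats seamlessly around the centre. The following is a minimal illustrative sketch, not the paper's implementation; the NumPy dependency, the wave-based wedge texture, and all parameter names are assumptions made for illustration:

```python
import numpy as np

def kaleidoscope(size, center, n_sectors=8, seed=0):
    """Render an n-fold symmetric kaleidoscope patch around a throw point.

    Each pixel's angle is folded into one sector (mirrored in alternate
    sectors), so a single procedurally generated wedge repeats around
    the centre -- the classic kaleidoscope construction.
    """
    rng = np.random.default_rng(seed)
    freqs = rng.integers(2, 6, size=3)            # random waves stand in for the wedge texture
    phases = rng.uniform(0.0, 2 * np.pi, size=3)

    ys, xs = np.mgrid[0:size, 0:size]
    dx, dy = xs - center[0], ys - center[1]
    r = np.hypot(dx, dy)                          # distance from the throw point
    theta = np.arctan2(dy, dx) % (2 * np.pi)      # angle in [0, 2*pi)

    sector = 2 * np.pi / n_sectors
    folded = theta % sector
    odd = (theta // sector).astype(int) % 2 == 1
    folded = np.where(odd, sector - folded, folded)   # mirror alternate sectors

    img = np.zeros((size, size))
    for f, p in zip(freqs, phases):
        img += np.sin(f * n_sectors * folded + p) * np.cos(0.05 * f * r + p)
    img *= np.exp(-r / (size / 2))                # fade away from the throw point
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9)          # normalized intensity in [0, 1]
```

Drawing such a patch at each detected impact point, varying `n_sectors` or `seed` per throw, would yield the recurring but diverse firework-like patterns the design calls for.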
---
Applied by Aesthetics for Artwork
Audience members could invest their emotions in the artwork and change those emotions through participation. This artwork rests on technical implementation and graphic design; however, those technical factors alone would not make the installation an interactive artwork without aesthetic values. Accordingly, the project foregrounds not designs or technical devices but two aesthetic values: the pleasure framework and disruptive aesthetics.
(1) Pleasure Framework
The pleasure framework proposes thirteen pleasures that participation in an interactive artwork can evoke: creation, exploration, discovery, difficulty, competition, danger, captivation, sensation, sympathy, simulation, fantasy, camaraderie, and subversion. These are the categories in which a participant might feel pleasure during an interactive art experience [22]. Audience members could typically experience the following pleasures by participating in "deBallution":
① Creation -Audience members felt they were part of the creation when they drew a new painting by making a circle, making a new world through their own actions. ② Discovery -Audience members discovered new, unfamiliar scenery of the city; particular actions could provoke different images and transformed content. ③ Difficulty -Audience members had difficulty making circles on the screen precisely where they wanted to draw them. This difficulty gamified the experience, focusing them on achieving a goal. ④ Competition -Audience members participated in "deBallution" in collaboration, trying to achieve a defined goal together. Completing the goal could involve working with or against another human participant when making a new world. ⑤ Subversion -Audience members could destroy the background image on the screen and create a new world through their own actions.
Fig. 4. Examples of graphic design [14]
(2) Disruptive Aesthetics
Each artwork has its own artistic values. Disruptive aesthetics places artistic value on social meanings [23]; in particular, it leads audience members to break down traditional social values. The audience overthrows the social order and proposes a new world (Fig. 5).
① Audience -Audience threw pseudo-balls.
② Interactive installation -The installation presented the screen video and the overlaid circle images, influenced by audience action. ③ Artwork contents -The previous background video broke down after the audience filled it with generative circle images; a new, futuristic video ensued. ④ Breaking a rule/community/role -Audience members disrupted the city landscape on the screen through their own activities. Such disruptive aesthetics in "deBallution" broke down the rule of maintaining the community and created a new world.
These aesthetics give the artwork a new artistic value and experience, independent of design values or technical implementation issues.
---
Prototype and Evaluation
---
Prototype Implementation
The prototype focused on the audience's throwing actions and on reflecting circle images on the screen. Two or three participants could take part in these activities at once. The circle images filled the screen and disrupted the background images. This distorted content is a new world that the audience itself created by changing the city scenery.
The prototype was built with openFrameworks, using C++ code connected to a Kinect. In the prototype test, the video of city scenery played on a small screen instead of a media façade, and the focus was on audience activities rather than on the video content. Audience members made throwing gestures in front of the screen; circle images were generated at the positions of their throws by calculating a virtual location through the Kinect, connected via the openFrameworks program. As audience members continued throwing, new circle images appeared around the previous positions and expanded progressively until the screen was covered, hiding the scenery of the city. At that point, the screen was full of varied circle images, the original video reversed on the screen, and the audience's throwing play was complete.
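As a rough illustration of this pipeline, the sketch below shows how a tracked hand position might be mapped to a screen circle and how a throw might be detected from frame-to-frame hand motion. The structures, thresholds, and scaling here are hypothetical stand-ins, not the project's actual openFrameworks/Kinect code.

```cpp
// Hypothetical sketch of the throw-to-circle mapping described above.
// Joint positions are assumed to arrive from the Kinect in metres, with
// x and y normalized to roughly [-1, 1]; all thresholds are illustrative.
struct Vec3 { float x, y, z; };
struct Circle { float x, y, r; };

// Map a normalized hand position onto a screen of the given size; depth (z)
// scales the circle radius, so a throw released nearer the screen draws a
// larger circle.
Circle circleFromThrow(const Vec3& hand, int screenW, int screenH) {
    Circle c;
    c.x = (hand.x + 1.0f) * 0.5f * screenW;           // [-1,1] -> [0,screenW]
    c.y = (1.0f - (hand.y + 1.0f) * 0.5f) * screenH;  // flip y for screen coords
    c.r = 20.0f + 30.0f / (1.0f + hand.z);            // nearer throw -> bigger circle
    return c;
}

// A throw is registered when the hand moves toward the screen (z decreasing)
// faster than a threshold between two frames dt seconds apart.
bool isThrow(const Vec3& prevHand, const Vec3& hand, float dt,
             float speedThreshold = 1.5f) {
    float vz = (prevHand.z - hand.z) / dt;  // forward speed, m/s
    return vz > speedThreshold;
}
```

In the installation itself, a routine like `circleFromThrow` would run once per detected throw, with the resulting circle accumulated into the layer that progressively covers the city video.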
Video cuts from prototype exhibition are as follows (Fig. 6).
---
Evaluation Factors
After the prototype exhibition, we conducted a user test and a group interview. Ten participants (5 males, 5 females) took part in the prototype test. Their ages ranged from 22 to 38 years (mean = 28.2); eight were right-handed and two were left-handed. Participants could throw pseudo-balls as much as they wanted, without restrictions on time or number of throws. After a survey, we interviewed the participants. In the interviews, we focused on two questions, concerning the social group-play experiment and ideas for development to increase the artwork's value.
---
Results and Discussion
(1) Main objective
The highest factor concerning the main objective of the participants was making circles in the content (Fig. 7).
This means that participants wanted to see the results of their own activities and understood that the throwing action alone could influence the content. Participants were interested in throwing to make circles and to develop the next stages. The participants' objectives influenced content development, because they wanted to change the media through interaction; this objective differs from observation or appreciation in Bishop's view of participatory aesthetics. However, participants had differing objectives and desires. In developing the artwork, we therefore focused on participant activities that adjust different images through throwing actions, reflecting the participants' own desires and creating antagonism and competition between participant groups. The average number of throwing actions was 37. Because the test did not prescribe how many throws to perform, this continued activity suggests that the participants were immersed in "deBallution": they threw voluntarily because they wanted to watch content produced by their own actions and were interested in participating by making circles. Participants also engaged with the artwork using varied throwing actions and poses, much like game play; for example, they jumped for a power-up throw, used a shot-put throw, or threw with both hands. The canonical throw for this artwork was the overhand throw, signifying "overthrowing the rule, community, and role"; however, participants improvised side-arm, underhand, three-quarter, twisted, and jump throws. Attempting a throwing action with whole-body movement especially influenced audience emotion through participation. This result suggests that the audience wanted to act on their own varied objectives rather than be controlled by the limitations of the installation.
The "deBallution" installation was designed around the audience's overhand throwing action, but it could accommodate these other throwing actions as well.
(2) Group play
Participants preferred group play over single play. This means that participants wanted to play in collaboration, competition, or conflict for self-motivation.
(3) Values
Participants had various artistic values in the prototype of "deBallution."
The highest-rated factor of the pleasure framework was the pleasure of creation; subversion and simulation were the next highest (Figs. 8 and 9). The highest-rated factor of disruptive aesthetics was overthrowing rules. Among ideas for developing the artwork's content, the highest-rated was changing the circle image into other images.
Narrative development, modification of the background image, and competition among participants were the highest-rated development elements. In the interviews, participants proposed various background images and videos. Participants also wanted the performance to develop by stage levels, like storytelling or gamification. In general, participants' interest and immersion increased with group performance, because competition influenced their throwing objectives and enhanced their throwing skills. In short, participants wanted to experience dramatic visualization driven by their own activities and by changes in the artwork's values. These elements matched our intended objective.
We found the pleasure framework and disruptive aesthetics at work in the artwork. The interactive artwork generated both intended and unintended audience participation [24]. Participants were absorbed in "deBallution" through the throwing activities alone. They drew their own images with their own throwing actions and wanted to draw circle images exactly where they aimed, much like the abstract drawings of Kandinsky or Jackson Pollock. Participants wanted to draw images of the same size and location as their previous throwing points, reflecting a desire for repetition: they wanted an in-depth, layered visualization of the circle images and the background, and they desired to duplicate their own creations and develop the next stages. These desired actions will inform the realization of the public interactive artwork "deBallution"; they will generate more active unintended actions and create new image relations between the background image and the images participants generate. These artwork values correspond to the values of participatory artwork described by Claire Bishop, in which direct participant activities bring about a new world [7]. The results indicate that the throwing action autonomously gave rise to varied participant activities, from which it was possible to develop the content and artistic values. In the prototype test, audience members threw imaginary objects and generated expanding circle images; those images filled the screen and disrupted the background images, and this distorted content is a new world that the audience itself created by changing the city scenery. The original objective for "deBallution" was display on a media façade or a large screen. Through discussion, we revised the content based on the prototype test results; the following table summarizes the changes to the artwork (Table 1).
For increased audience participation, we applied these audience action patterns in the prototype to generate random kaleidoscope images.
---
Realization of "deBallution"
The system was designed to recognize users' throwing actions and patterns. The six main parameters were the x, y, and z positions of the elbow and of the hand; these were observed and analyzed to decide whether a throwing action had occurred. The patterns of the audience's throwing were used to describe interactivity in levels (low, medium, high). The "high" level of interactivity denotes a meaningful interaction between the system and the participants, in which audiences become active authors or creators. The interactivity levels are expressed through different shapes, sizes, and colors of 3D generative kaleidoscopes. For instance, high-level interactions produce larger sizes, dynamic shape changes in the animation, and vibrant shades of red, which color theory associates with energy, strength, power, and celebration. Medium-level interactions produce mid-range sizes and animation and comfortable shades of green, which create feelings of relaxation, balance, and soothing emotion. Low-level interactions produce small sizes, small changes, and sophisticated shades of blue, associated with calmness, spirituality, and futurism [14] (Fig. 10).
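The level-to-style mapping described above can be sketched as follows. The actual system classified throws from six joint parameters (elbow and hand x/y/z); here that analysis is collapsed into a single intensity score, and the thresholds and sizes are illustrative assumptions, with only the low/medium/high color scheme (blue/green/red) taken from the text.

```cpp
#include <string>

// Illustrative sketch of the interactivity-level mapping; the intensity
// score stands in for the system's analysis of elbow/hand positions, and
// the thresholds and base sizes are assumptions, not measured values.
enum class Level { Low, Medium, High };

struct KaleidoscopeStyle {
    float size;         // base size of the generative kaleidoscope
    std::string color;  // dominant shade, per the color scheme in the text
};

// Classify an interaction intensity in [0, 1] (e.g. derived from hand
// speed and throw frequency) into one of the three interactivity levels.
Level classify(float intensity) {
    if (intensity > 0.66f) return Level::High;
    if (intensity > 0.33f) return Level::Medium;
    return Level::Low;
}

// Map a level to the visual style of the generated kaleidoscope.
KaleidoscopeStyle styleFor(Level level) {
    switch (level) {
        case Level::High:   return {120.0f, "red"};    // energy, celebration
        case Level::Medium: return {70.0f,  "green"};  // balance, relaxation
        default:            return {30.0f,  "blue"};   // calmness, futurism
    }
}
```

In a real render loop, `styleFor(classify(...))` would run per throw, with the returned size and color driving the parameters of the 3D kaleidoscope animation.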
---
Conclusion
In this paper, we proposed a basic prototype of the interactive artwork "deBallution," based on cultural heritage. In this interactive artwork, audience members threw pseudo-balls at a screen and could make a new world through their own activities.
Background: If smoking is common within a pregnant woman's social circle, she is more likely to smoke and her chances of succeeding in quitting smoking are reduced. It is therefore important to encourage smoking cessation in a pregnant woman's social circle. Midwives are ideally positioned to help pregnant women and members of their social circle quit smoking, but there is currently little knowledge about whether and how midwives approach smoking cessation with pregnant women's social circles.
Methods: In 2017 and 2018, semi-structured interviews were conducted with 14 birth care providers in the Netherlands. Interviews were inductively coded; data were analyzed thematically.
Results: In the interviews, midwives reported that they do not commonly provide smoking cessation support to members of pregnant women's social circles. The respondents noted that they primarily focused on mothers and were not always convinced that advising the partners, family, and friends of pregnant women to quit smoking was their responsibility. Barriers to giving advice to the social circle included the lack of a trusting relationship with the social circle, concerns about raising the topic and giving unwanted cessation advice to members of the social circle, and a lack of opportunity to discuss smoking.
Conclusions: Midwives in the Netherlands were reluctant to actively provide smoking cessation advice to the social circle of pregnant women. To overcome barriers to addressing cessation with the social circle, educational programs or new modules for existing programs could be used to improve skills related to discussing smoking. Clear guidelines and protocols on the role of midwives in providing cessation support to the social circle could help midwives overcome any ambivalence they might have.
weight [3,4]. Infants exposed to tobacco smoke are at greater risk for sudden infant death syndrome [1], childhood obesity [6,7], and ear infections and upper respiratory infections [8]. These risks can be reduced by helping pregnant women quit smoking and helping them reduce exposure to tobacco smoke [9].
Pregnancy is an opportunity for pregnant women and people in their social circle to change their health behavior, such as quitting smoking [10,11]. However, not all women quit smoking during their pregnancy. In Europe, the estimated prevalence of smoking during pregnancy is around 8% [12]; 30% of women who smoked before pregnancy continued to smoke daily during pregnancy [12]. In 2018, 7% of pregnant women in the Netherlands smoked at some point during pregnancy; 23% of the women who smoked before pregnancy continued smoking during the entire pregnancy [13].
A pregnant woman's partner, friends, and family (her 'social circle') were found to be a major determinant for maternal smoking cessation during pregnancy [14]. Research shows that a pregnant woman's social circle directly influences her smoking through the support they give and by changing their own smoking behavior [15][16][17][18]. However, if a pregnant woman lives with people who smoke and if smoking is common within her social circle, her chances of succeeding in quitting smoking have been shown to be reduced [19]. In addition, exposure to second and third hand smoke is associated with adverse health effects for mothers and children [4,20,21]. Healthcare providers can be effective at assisting members of a pregnant woman's social circle in quitting or encouraging them not to smoke near a pregnant woman [22].
Due to the harms and risks to maternal and fetal health associated with tobacco use and tobacco smoke exposure [4,20,21], it is important that obstetrician-gynecologists (OB-GYNs) and midwives (hereafter birth care providers) address tobacco use and tobacco smoke exposure with the pregnant women's social circle. Pregnancy can serve as a teachable moment for smoking cessation for a pregnant woman's social circle, particularly partners, and an opportunity for birth care providers to help the pregnant woman and members of her social circle to quit smoking [23][24][25]. Engaging the social circle has the potential to increase the effectiveness of smoking cessation programs for pregnant women [8] and can improve the public health impact of these programs by increasing the number of people who quit smoking. However, the members of pregnant women's social circle often receive little to no smoking cessation advice or assistance from birth care providers [26].
In the Netherlands, midwives are the primary birth care providers for the majority of pregnant women. In accordance with the Netherlands Healthcare Inspectorate regulations, midwives should counsel pregnant women and their partners about quitting smoking using the V-MIS protocol (Minimale Interventiestrategie Stoppen met Roken voor de Verloskundigenpraktijk -Minimal Intervention Smoking Cessation Strategy for Midwifery Practices) [27,28]. The protocol consists of asking pregnant women and their partners about smoking, working to increase motivation to quit, addressing barriers to cessation, setting a quit date, discussing cessation tools and techniques, offering help after the quit date, and working to prevent relapse [27]. Research on the use of the protocol also showed that 81% of midwives almost always learn the smoking status of pregnant women's partners, yet midwives lacked skills in motivational interviewing [29] and didn't follow all of the steps of the V-MIS protocol [30]. Further research is needed on the provision of smoking cessation support by birth care providers to the social circle of pregnant women.
---
Aims
The aim of this study is to explore, through interviews, experiences of birth care providers in the Netherlands with providing smoking cessation support to members of pregnant women's social circle.
The results of this study provide insight into the ways in which birth care providers assist members of pregnant women's social circle with smoking cessation, as well as into the barriers that birth care providers have faced when working with a pregnant woman's social circle. Data from this study can be used by health policy developers, birth care educators, and birth care providers to further develop, improve, and implement smoking cessation care guidelines for midwives, especially guidelines in the Netherlands or in healthcare systems with a similar structure of midwife-delivered care for pregnant women.
---
Methods
---
Birth care in the Netherlands
In the Netherlands, midwives are trained in prenatal care, birth care, and postnatal care through a 4-year direct-entry Bachelor of Science in Midwifery program [31]. After completing their education, midwives register as healthcare professionals with the Netherlands Ministry of Health, Welfare and Sport. In 2016, 3221 midwives were registered healthcare professionals in the Netherlands, with the majority working in primary care midwifery practices [32].
Midwifery care for pregnant women begins in the eighth week of pregnancy and continues until a few weeks after birth [31]. Most pregnancies in the Netherlands are, at least at first, cared for by primary care midwives who practice in independent midwifery practices.
If complications arise during pregnancy, midwives refer women to hospital-based care, where an OB-GYN or a secondary care midwife provides care for as long as necessary.
---
Design of data collection
In order to understand birth care providers' experiences with providing smoking cessation support to the social circle of pregnant women, semi-structured interviews were conducted with fourteen birth care providers in the Netherlands. The interviews were conducted using an interview guide. The interview guide included an introduction to the interview, a description of the goals of the interview, and questions about birth care providers' experiences with addressing tobacco use and exposure with the social circle of pregnant women, barriers in discussing cessation with the social circle, and the role of birth care providers in cessation support to the social circle. An English version of the interview guide is provided in Additional File 1.
---
Ethical approval
The research was approved by the Trimbos Ethics Committee in October 2017 (2362208) and was carried out in accordance with the 1964 Helsinki Declaration and its later amendments. The data presented in this article comes solely from the interviews with birth care providers and does not include any data from patients or any identifiable data about patients or members of patients' social circles.
Prior to interviews, all participants signed a consent form stating that they were informed that participation was voluntary, that they could withdraw at any time, that they were willing for the interview to be recorded, and that the data would be analyzed anonymously. No incentives for participation in the interviews were offered.
The audio files and transcripts were saved on a secured drive and were only accessible to those analyzing the data (EW and LS). All data from the interviews has been presented anonymously.
---
Participant selection and recruitment
The study team aimed to conduct at least twelve interviews with birth care providers; this number is in line with guidance on interviews and saturation, which notes that 10-12 interviews are often sufficient for reaching data saturation [33,34]. The interviews and the data analysis were conducted simultaneously. We reached saturation at interview 11. However, two additional interviews were conducted to ensure no new information emerged.
A mix of purposeful and convenience sampling was used to recruit birth care providers for interviews. First, purposeful sampling was used to engage birth care providers who often work with people who smoke. Birth care providers were recruited from four large cities in the Netherlands with relatively high smoking rates (Rotterdam, The Hague, Utrecht, and Arnhem) [35]. The research team used public data to select eligible midwifery practices within or near socially disadvantaged city districts [36]. A list of eligible midwifery practices (N = 28) was made; these practices were contacted by telephone to invite one birth care provider employed in the practice to participate in an interview. Potential participants were invited to be interviewed at a time and location of their choice.
Of the 28 eligible midwifery practices, 13 practices did not respond to our request. Of the 15 practices that did respond, 8 practices declined participation, as they had other priorities (n = 6), were participating in other studies (n = 1), or had a lack of experience with smokers (n = 1). Seven practices from the 28 eligible midwifery practices responded positively to the request to have a birth care provider participate in an interview.
Simultaneously, convenience sampling was used to recruit birth care providers. The study team invited 7 birth care providers with whom they had had previous contact through exploratory research for a smoking cessation intervention for pregnant women [37]. In that exploratory research, birth care providers had shared their experiences with smoking cessation care for pregnant women in focus groups [results not published]. The study team only invited birth care providers who were not involved in the intervention study. The birth care providers were contacted by telephone or email. All 7 agreed to participate and were interviewed at a time and location of their choice. These birth care providers were located in The Hague (n = 2), Utrecht, Zeist, Gouda, Leiden, and Zwolle.
---
Data collection and setting
Thirteen interviews were conducted with 14 birth care providers. The interviews were conducted by two female researchers, working in the field of smoking cessation and trained in qualitative research methods (EW and LS). Face-to-face interviews were conducted between November 2017 and February 2018. The interviews took place at midwifery practices (n = 12 interviews with 13 birth care providers) or at the participant's home (n = 1). One interview was conducted with two midwives simultaneously, both working at different midwifery practices. All interviews were conducted in Dutch. The interviews took 30-60 minutes (average: 40 minutes) and were recorded. The interviews were transcribed verbatim.
---
Coding
All interviews were coded [38] using MAXQDA18. EW and LS familiarized themselves with the data by reading the transcripts. An inductive coding approach was applied where data-driven codes on the influence of the social circle on pregnant women and the barriers birth care providers perceive in discussing smoking cessation with the social circle of pregnant women were generated. After coding five interviews individually, EW and LS discussed their findings and developed a preliminary list of codes. After coding the following four interviews individually, EW and LS reviewed the codes and discussed the definitions of codes to determine agreement. The final list of codes was developed; EW went back to the previously coded interviews to include the new codes. EW coded the remaining four interviews with the final list of codes (see Additional File 2).
---
Data analysis
Data were analyzed according to the principles of thematic analysis by Braun and Clarke, who define a theme as: "A theme captures something important about the data in relation to the research question, and represents some level of patterned response or meaning within the data set" [39]. After data familiarization and agreement between EW and LS on the initial coding process, EW grouped the codes into preliminary themes. Comparison between coded data was used to identify new patterns in the qualitative data and to ensure that the themes reflected these patterns. EW then summarized these preliminary themes into paragraphs for a Dutch-language report [40], which LS reviewed. Through discussion, EW and LS reached consensus on the definitions and names of the themes. This paper includes only the themes related to its aim; see Additional File 3 for more information on the themes.
---
Translation
Quotes were translated from Dutch to English by EW. A native English speaker (BJHW) assisted with translation of a selection of quotes.
---
Results
---
Participant details
Fourteen birth care providers were interviewed: twelve primary care midwives, one primary and secondary care midwife, and one OB-GYN. The birth care providers were all women and had at least one year of experience working in prenatal care. The results were primarily derived from interviews with midwives; the term "midwives" will be used hereafter unless the data came from the interview with the OB-GYN.
---
Results of the analysis
Three main themes were identified during data analysis. These themes were: (1) midwives' experiences with assisting members of the social circle with cessation, (2) perceived barriers to discussing cessation with members of the social circle, and (3) midwives' role in assisting the social circle with cessation.
---
Experiences with assisting members of the social circle with cessation
Midwives believed that a pregnant woman's social circle has a considerable influence on her smoking status:
"They [the social circle] play an important role. What you often see is that if women smoke, their partners and their mothers smoke too. You rarely come across a woman who is the only one who smokes in a smoke-free social circle. " (Midwife 1)
Regarding the social circle of pregnant women, midwives encounter partners more often during antenatal appointments than family members or friends.
---
"Partners often attend the first consultation [with the midwife], but their attendance later in pregnancy varies. Some partners we never see and others come along every time. On average, they come along 3 or 4 times. [ … ] Sometimes a [grand] mother comes along, but not very often. We even less often see a friend. " (Midwife 6)
While midwives acknowledge that pregnant women who smoke are usually part of a social circle in which smoking is common, midwives noted that they do not commonly ask members of the pregnant woman's social circle if they smoke:
---
"When I ask a pregnant woman about smoking, it could be that her mother also spontaneously says something about smoking, but I am not going to ask mothers, sisters, or friends about their smoking. " (Midwife 13)
Midwives reported that when they do talk about smoking with partners, this usually happens only once, with limited or no follow-up:
---
"I always mention in the first consultation that quitting smoking together is easier. If I am being honest, after that I let go of the partner a little bit and I focus on the pregnant woman. I mention it [smoking cessation] again to pregnant women, but I won't mention it again to the partner. " (Midwife 2)
Further, when midwives do talk about smoking with partners or members of the pregnant woman's social circle, the focus may be limited to harm reduction. As the quote below reveals, midwives may view smoking cessation as not necessarily feasible: "Well, smoking outside is often the highest attainable [goal], because they have complicated lives and it [smoking] is just hard to deal with. " (Midwife 11) This view may influence how midwives address the smoking behavior of pregnant women's partners.
---
Barriers to addressing smoking with members of the social circle
Midwives faced barriers to providing cessation support to the social circle of pregnant women. One barrier midwives perceived was the lack of a trusting relationship with the social circle of pregnant women:
---
"I wouldn't proactively ask a mother [of a pregnant woman] during a consultation if she smokes, even though I know it would be good to do. [ … ] I feel less connected to the pregnant woman's mother than to her partner. " (Midwife 2)
Moreover, midwives expressed that they do not want to risk their relationship with the pregnant woman by discussing smoking with her social circle:
---
"The trusting relationship can be damaged by involving her mother. That's why it's such a big problem all around. " (Midwife 5)
Concerns about raising the topic with members of the social circle of pregnant women were expressed by the interviewed midwives:
---
"Sometimes it feels a little … how can I explain that … it feels a little like I'm interfering too much with them. " (Midwife 3)
According to some midwives, partners do not want to talk about smoking or smoking cessation with them:
---
"[ … ] I think that partners are less open to smoking cessation support [than pregnant women], because they don't really see why a midwife should discuss this with them. " (Midwife 2)
Midwives were concerned about giving unwanted smoking cessation advice to members of the social circle. They were also concerned that members of the social circle might not see midwives as the right professionals to receive cessation advice from.
Another barrier many midwives perceived is a lack of opportunity to discuss smoking:
---
"We only see them [pregnant women and partners] for about seven months or so and after that it stops. That is a short amount of time. I'm in a practice with four midwives, so how many times do you see someone personally?" (Midwife 11)
When a pregnant woman receives care at a group practice, she may see different midwives during her pregnancy. As partners and/or other members of a woman's social circle do not attend every visit, the midwife has limited interactions with a pregnant woman's friends, family, and partners and limited opportunities to address smoking.
Midwives stated that most partners had already made changes to their smoking: "You know, in general, of course that [quitting smoking] is discussed at some point during the pregnant woman's care. In 9 out of 10 cases, a partner has already made changes and goes to the balcony or outside to smoke. If the pregnant woman is okay with that, who are we to say that it is not okay?" (Midwife 9)
According to midwives, most partners believe that they are already doing enough by smoking outside, a belief which is echoed by the pregnant women themselves. As a result, some midwives did not encourage partners to make further changes by quitting smoking.
---
Midwives' role in assisting the social circle with cessation
The midwives described being ambivalent about their responsibility to provide smoking cessation support to a pregnant woman's social circle. When asked about the midwife's role with regard to assisting the social circle with smoking cessation, one midwife stated:
---
"Yes and no, I think our role as midwife is to take care of the health of the mother and child. The social circle can help, but I think it is their own responsibility too. [ … ] I am not sure if I am really responsible for that [smoking cessation support of the social circle]. " (Midwife 5)
However, other midwives noted that cessation advice to the social circle would be an important part of care for pregnant women:
---
"On the one side it is [the role of a midwife], because I think that if the social circle is engaged, more pregnant women will successfully quit smoking. This is eventually our goal. On the other side, we must draw a line somewhere, also from a time/resources point of view." (Midwife 2)
Some midwives saw a limited role in engaging the social circle:
---
"I think that it is the role of a midwife to get a picture [of the smoking behavior] of the social circle. I don't think it is our role to mobilize the social circle in smoking cessation. I find it hard to tell if that fits our role." (Midwife 6)
Midwives stated that other healthcare professionals, such as General Practitioners (GPs) and youth health care providers, could play an important role in smoking cessation counselling for both partners and the extended social circle of pregnant women.
---
"GPs have a more equal treatment relationship with both the partner and the pregnant woman [than a midwife]. I have a better relationship with the pregnant woman of course. This makes the GP more suitable to provide smoking cessation counselling to the social circle [than a midwife]. " (Midwife 2)
While midwives indicated that a GP could help pregnant women and her social circle with smoking cessation, actively referring smokers to smoking cessation services is often not done, in part due to a lack of knowledge:
---
"For us it was a problem that we didn't have a guide of social services, where we refer someone if we want to refer them [pregnant woman and social circle]: what are the costs, what is useful and what is not?" (Midwife 6)
In general, most midwives agreed that they should play some role in smoking cessation counselling for the social circle, albeit a very limited one:
---
"As a midwife my initial focus is on my important task of taking care of the baby and the mother. Of course, this [the social circle] is also included in this, but still I find it more important to focus on the pregnant woman than on her social circle. " (Midwife 8)
---
Discussion
The interview data show that midwives acknowledged the importance of a pregnant woman's social circle when it comes to smoking cessation. However, while they found it important, the interviewed midwives noted that they do not commonly provide smoking cessation support to members of the social circle.
In the Netherlands, primary care midwives are required by the Healthcare Inspectorate to counsel pregnant women and their partners to quit or reduce smoking by using the V-MIS protocol [28]. While partners are specifically mentioned in this protocol, other members of pregnant women's social circle are not included. Absence of clear guidelines and protocols could influence the provision of effective smoking cessation advice to the pregnant women's social circle [41]. The interviews showed that midwives were often ambivalent about their responsibility to provide smoking cessation support to a pregnant woman's social circle. This ambivalence may have influenced the midwives' interactions with the social circle.
Midwives encountered a variety of barriers when addressing smoking with members of a pregnant woman's social circle. One such barrier was the fear of jeopardizing their relationship with the pregnant woman when they discuss smoking with her social circle. Having a trusting relationship with pregnant women was seen as needed for addressing tobacco use and exposure; previous studies found that midwives were concerned about adversely affecting the relationship they have with pregnant women by (repeatedly) asking about smoking [19,42]. In addition, the interviewed midwives found it challenging to discuss smoking cessation with members of the social circle, as they felt less connected to them than to pregnant women. Research conducted in the Netherlands on nurses working within a preventive care program for disadvantaged young women during and after pregnancy (2019) [43] found that the ability to build on a trusting relationship with pregnant women was seen as useful for discussing smoking. As seen in our data and in the literature, addressing smoking and discussing smoking cessation is affected by having and keeping a trusting relationship with the pregnant woman and her social circle.
Midwives found it challenging to motivate partners to quit smoking when they had already taken steps to prevent exposing pregnant women to tobacco smoke. A study by Gage et al. [44] (2011) found that partners of pregnant women would rather reduce their smoking or smoke outside than quit smoking completely. Our data showed that, according to midwives, pregnant women and their partners believed that smoking outside was a legitimate way to reduce the risk and harms from exposure to second- and thirdhand smoke, with complete smoking cessation by the partner seen as unnecessary. Some midwives in this study also believed that harm reduction, mainly smoking outside, is the highest attainable achievement for a pregnant woman's social circle. This may explain why midwives did not discuss quitting smoking completely with partners. However, it is important that partners quit smoking completely because women are less likely to quit themselves if their partner smokes [45] and smoking outside does not completely prevent second- and thirdhand smoke exposure [46].
A study on the role of midwives and gynecologists in smoking cessation care of pregnant women in Belgium (2015) found that healthcare professionals saw their role as limited to asking about smoking, providing brief advice, determining the readiness to quit, and referring clients to specialized cessation counseling [41]. The interviewed midwives in the current study also saw a role for other healthcare professionals, especially GPs, in helping the social circle of pregnant women quit. In the Netherlands, health insurance is compulsory; in 2016, less than 0.2% of the population was uninsured [47]. Since 2020, health insurance covers one primary care smoking cessation program per year [48]. Referring to other healthcare professionals for (more intensive) smoking cessation counselling is part of the V-MIS protocol [27]. However, findings from the current study show that referring smokers for more intensive cessation counselling is often not done and referral options were not well known by midwives.
---
Limitations
This study has a few limitations. Birth care providers were self-selected, which could indicate a pre-existing interest in smoking cessation. A more representative group of birth care providers may have different insights and experiences. Our study only took into account the perspectives of Dutch birth care professionals on providing smoking cessation advice to the social circle. Future research should explore the views of the social circle on receiving smoking cessation advice from birth care providers.
---
Conclusions
This study provides insights into why midwives in the Netherlands may be reluctant to actively provide smoking cessation advice to the social circle of pregnant women. The interviews showed that midwives can be ambivalent about their responsibility to provide smoking cessation support to a pregnant woman's social circle, which may influence the interaction they have with the social circle. In addition, midwives may face barriers to discussing smoking cessation with the social circle of pregnant women, such as the lack of a trusting relationship with the social circle, concerns about raising the topic and giving unwanted cessation advice to members of the social circle, and a lack of opportunity to discuss smoking.
---
Practical implications
Pregnancy can be a teachable moment for smoking cessation for members of pregnant women's social circle [23][24][25]. Partners of pregnant women can be engaged in smoking cessation efforts by advising them on the risks of secondhand smoke and on quitting smoking [8]. To overcome barriers such as damaging their trusting relationship with pregnant women, educational programs or new modules for existing programs could be used to improve skills related to discussing smoking with the social circle of pregnant women [19,37].
Clear guidelines and protocols on the role of birth care providers in providing smoking cessation support to the social circle could help midwives overcome ambivalence that they might have. While midwives have a unique role in providing smoking cessation advice and support to pregnant women and members of pregnant women's social circle, the smoking cessation advice, support, and care that they can provide is limited by time and opportunities to interact with the social circle and by their role and skills, which may not be the best suited for intensive smoking cessation support for members of the social circle. To that end, it is crucial that midwives know where and how to refer smokers to other healthcare professionals for more intensive smoking cessation care [41]. The development and implementation of care pathways in primary care midwifery practices could contribute to a better referral to other healthcare professionals [49].
---
Availability of data and materials
The datasets generated and/or analyzed during the current study are not publicly available because this was a qualitative study, but anonymized data are available from the corresponding author on request.
---
Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s12913-022-08472-7.
---
Additional file 1. Interview guide for birth care providers.
Additional file 2. Coding scheme.
---
Additional file 3. Identification of themes.
---
Authors' contributions
EW and LS were involved in the design of the study, data collection and the analysis of the interviews. JB was involved in the recruitment of the participants. BJHW reviewed and revised the manuscript and contributed to the discussion section. LS, JB and MCW were involved in the revision of the manuscript. All authors read and approved the final version of the manuscript.
---
Declarations
---
Ethics approval and consent to participate
The research was approved by the Trimbos Ethics Committee (2362208) and the research was carried out in accordance with the 1964 Helsinki Declaration and its later amendments. Prior to interviews, all participants signed a consent form stating that they were informed that participation was voluntary, that they could withdraw at any time, that they were willing for the interview to be recorded, and that the data would be analyzed anonymously. The data presented in this article comes solely from the interviews with birth care providers and does not include any data from patients or any identifiable data about patients or members of patients' social circles.
---
Consent for publication
Not applicable.
---
Competing interests
The authors declare that they have no competing interests.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
---
The number of centenarians is rapidly increasing in Europe. In Portugal, it has almost tripled over the last 10 years and constitutes one of the fastest-growing segments of the population. This paper aims to describe the health and sociodemographic characteristics of Portuguese centenarians as given in the 2011 census and to identify sex differences. Methods: All persons living in mainland Portugal and the Madeira and Azores islands aged 100 years or more at the time of the 2011 census (N = 1,526) were considered. Measures include sociodemographic characteristics and perceived difficulties in six functional domains of basic actions (seeing, hearing, walking, cognition, self-care, and communication) as assessed by the Portuguese census official questionnaires. Results: Most centenarians are women (82.1 %), widowed (82 %), never attended school (51 %), and live in private households (71 %). The majority show major constraints in seeing (67.4 %), hearing (72.3 %), and particularly in their mobility (83.7 % cannot/have great difficulties in walking/climbing stairs and 80.7 % in bathing/dressing). In general, a better outcome was found for reported memory/concentration and understanding, with 39.1 % and 42.5 % presenting no or mild difficulty, respectively. Top-level functioning (no/mild difficulties in all dimensions concurrently) was observed in a minority of cases (5.96 %). Women outnumber men by a ratio of 4.6, and statistically significant differences were found between men and women for all health-related variables, with women presenting a higher percentage of difficulties. Conclusion: Portuguese centenarians experience great difficulties in sensory domains and basic daily living activities, and to a lesser extent in cognition and communication. The obtained profile, though self-reported, is important in considering the potential of social and family participation of this population regardless of their functional and sensory limitations.
Based on the observed differences between men and women, gender-specific and gender-sensitive interventions are recommended in order to acknowledge women's worse overall condition.
---
Background
Although centenarians still represent a small proportion of the total world population, their number is projected to increase rapidly from approximately 441,000 in 2013 to 3.4 million in 2050 and 20.1 million in 2100 [1]. The rise in the number of centenarians has attracted research interest all over the world, particularly for the last two decades, in which several centenarian studies have been conducted in Europe following the examples of long-term centenarian studies conducted in the US and Japan [2].
Collectively, international studies have presented wide-ranging information about centenarians' sociodemographic characteristics [3], longevity patterns from a psychosocial and health perspective [4], and need for and use of health care services [5,6], to name a few of the research focus areas. In Portugal, although the number of centenarians has almost tripled over the last 10 years from 589 in 2001 [7] to 1,526 in 2011 [8] and the first population-based study has been recently established, the PT100 Oporto Centenarian Study [9], no large-scale information has been made available on this age group, particularly on their overall health status.
This study aims to present the main sociodemographic characteristics of Portuguese centenarians based on data from the last National Census, and to provide a first overview of their elementary health profile in terms of sensory functions (hearing, vision), functional status (walking/climbing stairs and bathing/dressing), cognition (memory/concentration), and communication (understanding/being understood). This set of questions has been developed by the Washington Group on Disability Statistics and is consistent with the International Classification of Functioning, Disability, and Health (ICF) [10]. Sex differences are also investigated.
---
Methods
The present study is based on information provided by Statistics Portugal (INE) and collected within the framework of the 2011 census [11,12]. It considers information about people aged 100 years or more at the moment of census data collection: gender (male or female), marital status (married, single, widowed, or divorced), number of years of education completed (illiterate, 4 years, 6 years, 9 years, 12 years, or higher education), income (pension, family support, properties or business, social support, other), religion (Catholic, other, without religion, or not available), and self-reported disability, which was assessed by a general question about having difficulties doing certain activities due to health problems or aging: difficulty seeing, even with glasses; difficulty hearing, even if using a hearing aid; difficulty walking or climbing steps; difficulty remembering or concentrating; difficulty with self-care such as washing all over or dressing; difficulty communicating (understanding or being understood by others). Each of these questions had three response categories: (1) No difficulty or some difficulty; (2) A lot of difficulty; (3) Cannot do it at all.
Information about type of residence (community or institution) and geographical mobility was also collected. An exploratory analysis was performed in order to characterize Portuguese centenarians. Differences between men and women were examined using the chi-square test or Fisher's exact test (if the assumptions of the chi-square test were not met). A significance level of α = 0.05 was considered.
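As an illustrative sketch (not the authors' code), the Pearson chi-square test of independence used to compare men and women can be computed by hand for a 2x2 table. The cell counts below are assumptions, reconstructed approximately from the sex-by-residence percentages reported in the Results, and are for illustration only.

```python
# Illustrative sketch: Pearson chi-square statistic for a 2x2 contingency table.
# Cell counts are approximate reconstructions from the reported percentages,
# not the actual census microdata.

def chi_square_2x2(table):
    """Return the Pearson chi-square statistic for a 2x2 table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Rows: male, female; columns: community, institution (approximate counts).
table = [[215, 58], [869, 384]]
stat = chi_square_2x2(table)
print(round(stat, 2))  # → 9.63
```

A statistic near 9.6 with one degree of freedom corresponds to p ≈ 0.002, consistent with the sex-by-residence association reported in the Results.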
---
Results
In 2011, the number of centenarians in Portugal mainland and Madeira and Azores islands was 1,526, of which 1,253 (82.1 %) were females. Women outnumber men by a ratio of 4.6. The majority of the population was widowed (1,251, 82.0 %), followed by singles (172, 11.3 %), married (86, 5.6 %) and divorced (17, 1.1 %). As for educational level, 940 (61.6 %) centenarians were illiterate, 436 (28.6 %) had completed up to four years of school, and 121 (7.9 %) had completed between 6 and 9 years of school. About half of them (51 %) never attended school (of these, 52.6 % know how to read and write and 47.4 % do not know how to read and write), and only a small minority had higher education (29, 1.9 %).
The great majority (1,084, 71.0 %) lived in private households, and 29 % (442) lived in institutions (e.g., nursing homes). Most centenarians (57 %) were born in the place they currently live (37.5 % in the same town, and 19.5 % in the same council, different town). Monthly income came mostly from their own pension (1,444, 94.6 % of the centenarians), with a minority (4.5 %) relying on their family as the main source of economic support. The majority was Catholic (1,316, 86.2 %).
With regard to the centenarians' sensory functions, functional status, cognition, and communication (Fig. 1), most reported having great difficulties in seeing (58.2 %) and hearing (63.4 %). Understanding others/being understood was the dimension with the highest percentage of individuals presenting no/mild difficulties (649, 42.5 %), and walking/climbing stairs the lowest (248, 16.3 %). On the other hand, almost half of the 1,526 centenarians (710, 46.5 %) mentioned being totally unable to take a bath/dress by themselves. Vision and hearing were the dimensions with the lowest percentages of total limitation, with 140 (9.2 %) and 136 (8.9 %) centenarians, respectively.
Gender analysis is presented in Table 1. With the exception of income source, all other sociodemographic characteristics of centenarians vary according to gender. Marital status and gender were associated (p < 0.001), with a notably higher percentage of married males (21.6 %) compared to married females (2.2 %). Education level was also associated with gender (p < 0.001), with a higher percentage of illiterate females (65.0 %) than males (45.8 %). In both sexes, the majority of centenarians lived in the community; however, an association between sex and type of residence was also observed, with a higher percentage of males living in the community (78.8 % for males and 69.4 % for females, p = 0.002). Although the religious profile was similar for males and females, sex and religion were associated (p = 0.007), with a slightly higher percentage of males without religion (4.0 % for males and 1.1 % for females).
The comparison of sexes according to sensory functions, functional status, cognition, and communication revealed statistically significant differences for all the dimensions under analysis. Overall, females recurrently presented greater difficulties in performing activities due to health problems or aging: 10.5 % could not see at all, whereas only 2.9 % of the males were in this situation (p < 0.001); being totally unable to hear was reported by 9.6 % of the females and by only 5.9 % of the males (p = 0.049); 41.1 % of the females could not walk/climb stairs, a situation reported by 21.2 % of the males (p < 0.001); being unable to memorize/concentrate was mentioned by 24.8 % of females and only 8.8 % of males (p < 0.001); half of females reported being totally unable to bathe/dress by themselves, whereas this happened in 31.1 % of the male population (p < 0.001). In the communication dimension, 20.1 % and 7.3 % of females and males, respectively, mentioned being totally unable to understand others/being understood (p < 0.001).
When combined, only 38 (2.49 %) centenarians were totally incapacitated in all considered dimensions (sensory functions, functionality, cognition, and communication). Of these, most were women (35, representing 2.29 % of all centenarians and 2.79 % of all female centenarians). On the contrary, 91 (5.96 %) centenarians were at the top level of functioning, i.e., presented no/mild difficulties in all considered dimensions concurrently. Most were females (62, 4.06 % of all centenarians, and 4.95 % of all female centenarians), but top-level functioning was proportionally more common among males (29, 10.62 % of all male centenarians).
---
Discussion
This is the first descriptive large-scale health profile of Portuguese centenarians ever conducted. Overall, along with the expected constraints in sensory functions and the presence of great difficulties in basic daily living activities (viz., mobility) and cognition (memory/concentration), a significant proportion of centenarians was found to have no/mild difficulty in understanding others and being understood. Although subjectively measured, these health-related dimensions provide an important basic health profile with implications for service programming. First, it reveals that most centenarians do not generally present a positive outlook and may potentially be in a frail condition; second, it discloses the need to pay attention to this population's care-provision needs, namely to the maintenance of their capacity to express their wills (ultimately their autonomy) regardless of the reported sensory constraints and functional difficulties.
In being able to understand others/being understood (42.5 %), the capacity of centenarians to express personal resolutions about their own lives according to personal rules and preferences must be acknowledged. This is a crucial aspect of mental health in advanced ages. Considering that centenarians often present high prevalence of diseases and chronic conditions that put them at risk of experiencing limited or restricted participation in society, being able to communicate with no/mild difficulties (expressing wishes, goals, and preferences) ought to be of crucial importance and must therefore be incentivized by care providers, particularly when difficulties in other domains (e.g., functionality, sensory, mobility) may limit independent living or social integration if appropriate accommodations are not made.
In addition to centenarians' care provision needs (and arguably their health service utilization, though that cannot be inferred from our results), equal attention must be given to their living arrangements, housing conditions, and provision of informal care. Considering that the great majority of centenarians live in private households (71 %), most probably with their immediate family, having a deeper insight into the flows of care provision in multigenerational households is imperative. Encouraging and sustaining family- and community-based care for the elderly is likely to be more cost-effective than residential and nursing care placements, and it is certainly a traditional scenario for Portugal due to the strong tradition of familism that characterizes southern European countries [13], but it raises important questions about the circumstances in which the care is provided. How to care for a very old relative when the caring family members themselves are in advanced age, and how to organize affordable care and medical services that meet the needs of the very old and their families are just two of the currently recognized challenges within centenarian studies [14] that must be taken into account when further analyzing the health circumstances of this population.
As for the second goal of this paper, identifying sex differences, our findings are globally in line with previous studies suggesting that very old men are a minority and tend to present better outcomes than women. Females live longer but suffer a higher level of morbidity, and this has been shown in several studies of the most elderly (e.g., Danish studies [15]) and particularly within centenarian studies from around the world (e.g., China [16], Greece [17]). For instance, in a recent population-based cohort study of centenarians using electronic health records conducted in the UK, authors found that fewer men than women reached the age of 100, and that women had greater multiple morbidity than men, as well as greater likelihood of having multiple geriatric syndromes [18].
In Portugal, we found that centenarian women substantially outnumber men and present an overall worse health status; we also found that the data reveal a more vulnerable social condition for women. Significant differences were found for marital status, educational level, and living arrangements, indicating that on these domains men present a more favorable situation (i.e., having a spouse alive, being non-institutionalized, and having a higher educational level). These findings are easily understood within sociocultural and historical circumstances and can be framed within a gender lens that characterizes this cohort's life (e.g., men tending to marry younger women, marital status being determined by the mortality rates of spouses, and remarriage being more socially acceptable for men). Men and women differ in their life expectancy (shorter for men) and health condition along life (worse for women, on average), and considering the observed differences in the current cohort of centenarians, gender-specific and gender-sensitive approaches to understanding health care service needs in very advanced age are warranted.
The feminization of aging is thought to have impact on health outcomes and services, and several authors have argued for a greater focus on the unique needs of women, a gendered approach to policy and intervention development, and the promotion of health across the life span [19]. It is our conviction that such focus ought to include the most elderly population. Particularly in the Portuguese scenario, there is strong evidence of inequity in health against women, and also of the existence of a "gender effect" in health care use [20]. On average Portuguese women report a worse health condition than men, a higher number of disability days, and are more likely to suffer from longstanding illness. The way these conditions link to their socioeconomic status and access to treatment in very old age deserves further attention. Following the trends observed in recent research with centenarians in the Mediterranean that focus on both quantitative and qualitative measures in order to explore perspectives on longevity [21], the study of gender differences and its determinants and consequences should also be conveniently addressed in further studies with this population.
Finally, it is important to highlight a specific sociodemographic aspect of the obtained profile of Portuguese centenarians: the percentage with illiteracy/very low educational level. Regardless of the observed differences between men and women on this aspect (more favorable for men), most centenarians never attended school, which is due to sociohistorical reasons (a long dictatorship period with scarce access to education). In further research with the Portuguese most elderly population, namely those grouped as "near centenarians," complex assessment protocols should therefore be used with caution.
The analysis of the current study relied on a dataset limited to the information available for centenarians, which did not allow for analysis on the trajectory of prevalence and the sex pattern of disability in the most elderly (i.e., beyond 85 years old). This is a weakness of this paper that would be important to overcome in future studies. Such analysis of disability, separately for males and females, should be done to place the analysis of centenarians' health in perspective.
---
Conclusions
This study provides important information about the current sociodemographic profile of Portuguese centenarians and describes their elementary subjective health profile as given by national census data. The high proportion of centenarians presenting great difficulties in sensory domains and basic daily living activities, and to a lesser extent in cognitive (memory/concentration) and communication capacities (understanding others/being understood), reveals the need for more information regarding this population's specific care needs, their current arrangements of both formal and informal care, and how these may differ from those of younger cohorts of older people. The Washington Group on Disability Statistics' questions considered in the Portuguese census were designed to provide comparable data cross-nationally and outline a set of domains that were selected using the criteria of simplicity, brevity, universality, and comparability [10]. Because these questions capture persons with similar problems across countries, relevant further research will be to compare data on this population (centenarians and the most elderly in general) at an international level, considering that the WG short set of questions has already been used in a few censuses. Another step will be to analyze the prevalence trajectory of difficulties by age, nationally and cross-nationally. This would provide important information on the population at higher risk for limitations in the ability to fully participate in society due to functional limitations in core domains.
As for the fact that most Portuguese centenarians are living in the community, this finding brings attention to informal caregiving dynamics (and service needs for both care-receivers and caregivers) that might be present in multigenerational households. Finally, the predominance of women among the centenarians and the observed sex differences reaffirm the importance of recognizing gender as a cross-cutting determinant for personal healthy aging trajectories [22]. Differences in health outcomes by sex are common throughout the life-course, but large population-based studies reporting trends in incidence and the health of centenarians are still scarce and should be conducted due to their pivotal role for planning adequate care.
---
Competing interests
The authors declare that they have no competing interests.
---
Authors' contributions
OR and LA conceived the study and participated in its design and coordination. OR, LA, and LT performed the statistical analysis and data interpretation. OR wrote the manuscript. CP critically revised the manuscript for important intellectual content. All authors read and approved the final manuscript.
Introduction: Vaccine hesitancy is a significant threat to public health efforts to stop the negative impacts of the COVID-19 pandemic. In India, it is critical to attain high vaccination rates to prevent overload in the healthcare system. Older adults play a central role in families' decision-making, but there is a lack of research on middle-aged and older adults' vaccine perceptions in India in general, and about their concerns about COVID-19 vaccinations. Research question: This study aimed to explore which factors affect COVID-19 vaccine hesitancy in middle-aged and older adults in India and what factors can reduce their vaccine hesitancy and increase its uptake. Materials and methods: A mixed-method sequential design was employed to conduct the study. Convenience sampling was used to recruit participants by sending an online invitation. For phase one of the study, a quantitative survey with 34 questions was distributed through WhatsApp. For phase two of the study, qualitative one-on-one interviews were conducted with those participants who completed the survey and agreed to participate in this next phase. Results: In total, 65 individuals responded to the online survey and 10 participated in semi-structured interviews. The participants were residing in India and ranged in age from 40 to 89 years. Analysis of the data identified that although the majority of participants supported the vaccine, the main reasons for vaccine hesitancy included uncertainty about the effectiveness of the vaccine, fear of side effects, unclear and insufficient information about the vaccines, and altered risk perception. This study also showed that those who felt that the consequences of COVID-19 were mild were more likely to be vaccine-hesitant. While the results of the study showed that most of the participants supported the COVID-19 vaccines, they expressed uncertainty regarding their effectiveness.
The safety and effectiveness of the vaccines were found to be prime contributing factors to vaccine hesitancy in this sample. The findings from this pilot study can be used to develop a larger, more comprehensive study on vaccine hesitancy among middle-aged and older adults in India, which would provide more insights into strategies that can be employed to promote vaccinations.

---

Introduction
In December 2019, the world saw the first case of COVID-19 in Wuhan, a city in Hubei province, China, and the WHO declared a pandemic on March 11, 2020 [1]. The symptoms of this novel and rapidly spreading disease were largely unknown but were shown to impact physical, mental, and cognitive health [2]. The pandemic caused enormous disruption to human life and the world economy, presenting an extraordinary challenge to global health [3]. With the rise in the number of cases and COVID-19-related deaths, vaccine development was accelerated, and several pharmaceutical companies began large-scale clinical trials to test new vaccines, including those that were in the early stages of development [3]. Vaccines to tackle the SARS-CoV-2 virus were developed rapidly, particularly by Pfizer-BioNTech, Moderna, and Johnson & Johnson [3]. In India, three vaccines also gained approval from the Central Drugs Standard Control Organisation (CDSCO): Covishield, developed by the Serum Institute of India; Covaxin, developed by Bharat Biotech; and Sputnik V, developed by the Gamaleya Research Institute of Epidemiology and Microbiology [4].
The COVID-19 outbreak in India began in Thrissur, Kerala, after a few students returned from Wuhan, China [5]. The first wave of the COVID-19 pandemic in India began in March 2020 with thousands of daily infections, but by February 2021, the curve of COVID-19 cases had flattened. However, the spiralling cases during the second wave of the pandemic in March 2021 led to the devastating conditions of an overworked health care system, a limited supply of hospital beds, oxygen, medications, and ventilators, and rapidly increasing mortality rates [6]. India saw the worst of the pandemic during this period as a result of the high infection rate of the new mutants of the SARS-CoV-2 virus, especially the Alpha variant (B.1.1.7) and the double-mutant Delta variant (B.1.617) [7]. The phenomenal speed of infection and the rise in the reproduction number (R0) during the second wave of the pandemic can be attributed to a confluence of numerous factors: lack of preparedness of the health care system, non-compliance with social-distancing norms, increased testing, political elections, poorly implemented precautions during festivals and weddings, sporting events, and large-scale religious gatherings like the Haridwar Kumbh Mela and the Tablighi Jamaat [6,8,9]. The Indian health care system cracked under the pressure and was unable to keep up with the volume of COVID-19 cases [10]. During this time, India was recording more than 400,000 daily infections, with the highest number of deaths, over 4,000, occurring in a single day [10]. This was India's worst battle against the virus, as graveyards ran out of space, round-the-clock mass cremations were conducted, and hospitals turned away patients due to the lack of beds and medical supplies [10].
As per the Indian Constitution, health care is the responsibility of the individual states and not the central government [11]. The national health care organization, the Union Ministry of Health & Family Welfare (MoHFW), is responsible for national-level health care programs and health policy and planning [11]. Despite the continued efforts of the state governments to control the spread of COVID-19, the health care system was crippled under the pressure and required leadership from the central government [9]. In this situation, the constitutional limits were crossed, and the central government stepped in to take charge of the situation [9]. Owing to India's population density, differences in health literacy, and administrative barriers, vaccinating the entire population was going to be a massive undertaking [11]. The vaccination drive in India began on January 16, 2021 [4]. The central government also launched a mobile application called CoWIN (Covid Vaccine Intelligence Network) for self-registration for vaccination slots, along with monitoring and surveillance of the number of doses administered [4].
In India, a certain level of vaccine hesitancy was expected with the novel coronavirus disease [4]. Several challenges to achieve high vaccination rates had already sprung up and undermined efforts to control the pandemic [4]. A limited number of vaccine slots on the CoWin app, vaccine shortage and cost of vaccines had posed challenges in the early phases of the vaccine drive [4]. Initially, they were not offered free of charge, and the price of the vaccine steadily increased and varied from one hospital to another [12]. Furthermore, barriers to registering on the government website, which was initially available only in English, intensified inequalities, deepening the technological divide in the country [13]. However, the government has modified the program by waiving pre-registration on the portal and making vaccines free for all [4]. Affordability, availability, and access to vaccines due to the severe demand and supply mismatch were major concerns [4]. Despite the extensive measures to provide information about the precautions and vaccination plans through telecommunication platforms, the rampant spread of COVID-19 misinformation has been posing major threats to vaccine uptake [14]. For example, myths about vaccines causing infertility and disrupting the menstrual cycle, consuming alcohol to treat COVID-19, previous Bacillus Calmette-Guérin (BCG) immunization as an effective measure to prevent COVID-19 infection and a previous infection of malaria making a person immune to COVID-19 have been leading to vaccine hesitancy, especially in rural areas [4]. Misleading information spread in Hindu and Muslim communities relating to the vaccines containing pork and aborted fetal tissue also proved detrimental to the vaccine drive [14].
The existing literature does a fair job of assessing concerns about the COVID-19 vaccine, including in India [15,16], but there is a paucity of research on perceptions of COVID-19 vaccinations among middle-aged and older adults in India. A narrative review by Troiano et al. looked at vaccine hesitancy in students and the general population in India [17]. Additionally, a study by Umakanthan et al. documented the results of a national survey, demonstrating the importance of vaccination coverage within the country [18]. However, none of these studies focused on middle-aged and older adults in India. In many Indian families, the elders are the primary decision-makers who influence overall behaviours and practices [19]. Because of these cultural beliefs and the importance of elders in the family, it is fundamental to understand the vaccination perceptions of this age group, since they are largely responsible for health-related decision-making in the family. The goal of this paper is to address this gap and examine the major factors that affect COVID-19 vaccine hesitancy in middle-aged and older adults in India. Drawing on a small convenience sample of middle-aged and older adults residing in India, this paper aims to offer initial insights into attitudes towards vaccinations and COVID-19 vaccines in this population group.
---
Materials And Methods
This exploratory pilot study utilized a sequential mixed-methods design in which online quantitative surveys were followed by qualitative individual interviews. After receiving ethics clearance from (blinded for peer review), an invitation to an online, anonymous survey was distributed to a small sample of people using convenience sampling. The WhatsApp social media platform was used to recruit participants. The recruitment message contained a link to the online survey.
---
Data collection and analysis
---
Quantitative Online Survey
The survey, containing 34 items on vaccine hesitancy, was distributed in September 2021, when the Delta strain was driving the second wave of the pandemic in India. Survey development was based on a validated scale from a study by Wong et al. [20], and the questions were modified to be compatible with the context of the COVID-19 pandemic in India. The questionnaire consisted of several sections that covered demographics, vaccine acceptability, perceived severity, susceptibility, benefits, barriers, level of trust in the government, and individual behaviour. To assess their level of agreement, concern, severity, and likelihood with respect to COVID-19, the variables were measured on a 5-point Likert scale. For example, to assess the level of concern, participants were asked "How concerned are you about the following?" with a series of sub-questions on specific concerns.

The survey was housed on the Qualtrics platform and opened with information about the study. Participants consented to the survey by clicking the "I agree" button to indicate their willingness to participate. At the end of the survey, the participants were asked if they would be willing to be contacted for the one-on-one interview, and those who consented were prompted to provide their email addresses for future contact. The quantitative data collected anonymously on Qualtrics were exported to Excel, where descriptive analysis and summary statistics were used for data analysis. Descriptive statistics, presented using histograms and pie charts, were used to highlight relationships and patterns in the data [21]. The data were organized and coded by converting the responses into a numeric format to extract themes and correlations.
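As a minimal sketch of this coding step, the snippet below converts 5-point Likert labels into numeric codes and produces a small descriptive summary. The item labels and responses are hypothetical, invented for illustration only; they are not the study's actual scale wording or data.

```python
from collections import Counter
from statistics import mean

# Map 5-point Likert labels to numeric codes (1-5).
# Labels below are hypothetical, for illustration only.
LIKERT = {
    "Not at all concerned": 1,
    "Slightly concerned": 2,
    "Somewhat concerned": 3,
    "Moderately concerned": 4,
    "Extremely concerned": 5,
}

# Hypothetical answers to one sub-question (e.g. concern about side effects).
responses = [
    "Extremely concerned", "Somewhat concerned", "Not at all concerned",
    "Moderately concerned", "Extremely concerned", "Slightly concerned",
]

# Convert the responses into a numeric format, as described above.
codes = [LIKERT[r] for r in responses]

# Descriptive summary: frequency counts, percentages, and mean score.
counts = Counter(responses)
percentages = {label: 100 * n / len(responses) for label, n in counts.items()}

print(f"mean score: {mean(codes):.2f}")
for label in LIKERT:  # print in scale order
    print(f"{label}: {percentages.get(label, 0.0):.1f}%")
```

Coding the labels numerically is what allows summary statistics such as means and percentage distributions to be computed and charted in a spreadsheet.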
---
Qualitative One-on-One Interview
For Phase two of the study, 10 one-on-one interviews were conducted with individuals who agreed to be contacted for a follow-up interview. These interviews were conducted during the third wave of the pandemic, in February and March 2022, when the Omicron strain was prominent. Questions were open-ended and included short demographic probes about age, gender, family composition, and decision-making powers, as well as more in-depth questions about participants' overall response to the COVID-19 pandemic and about their perceptions of COVID-19 vaccines' safety and efficacy. After transcription and removal of identifiers from the open-ended interviews, a thematic analysis framework was used to analyze the qualitative (textual) data to generate codes and identify themes [22]. Braun and Clarke's six-step iterative method was utilized for coding and generating themes to interpret and report the findings [22]. This involved: (1) familiarization with the data, (2) initial coding from related comments, (3) generating preliminary thematic groups, (4) reviewing themes, (5) defining and naming themes, and (6) producing the report.
---
Final Data Analysis
Once quantitative and qualitative data analyses were completed, a convergent parallel design was utilized for the final interpretation. This concurrent methodology enabled the independent collection of diverse yet complementary data during a similar timeframe and helped merge the results to obtain a comprehensive evaluation [23]. The themes and patterns identified during quantitative and qualitative data analysis were used for a side-by-side comparison jointly displaying both forms of data [23]. For this purpose, tables and figures were used to summarize the findings and concisely present them by the major themes identified. Figure 1 demonstrates the steps involved in the data collection and analysis process.
---
FIGURE 1: Steps involved in data collection and data analysis
---
Respondent Profile
A total of 65 people participated in the survey. Six responses were discarded because the surveys were incomplete or the age criteria were not met. Thus, the final number of responses for the survey was 59. Table 2 summarizes the demographic profile of the survey respondents. The average age of the participants was 53 years. Among the participants, 61% (n=36) self-identified as women and 39% (n=23) as men. In this participant group, 67% (n=40) held an undergraduate degree, most of the respondents were married (90%, n=53), and most lived with four to six people in their households (66%, n=39).
---
TABLE 2: Socio-demographic profile of survey participants
At the time they completed the survey, 80% (n=47) of the participants had received a vaccine for COVID-19 and most of them (73%, n=43) had received both doses. Of the people who got the vaccine, 24% (n=14) waited for others to get the vaccine before scheduling a shot. Results from the quantitative survey also showed that women (89%, n=32) were more likely to receive the COVID-19 vaccines when compared to men (65%, n=15) as seen in Figure 2.
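The within-group rates above follow directly from the reported counts (32 of 36 women and 15 of 23 men vaccinated). A small Python sketch re-deriving them, using only the counts given in the text:

```python
# Within-group vaccination rates, recomputed from the counts reported
# in the text: 36 women (32 vaccinated) and 23 men (15 vaccinated).
women_total, women_vaccinated = 36, 32
men_total, men_vaccinated = 23, 15

def pct(part: int, whole: int) -> int:
    """Percentage of `part` in `whole`, rounded to the nearest integer."""
    return round(100 * part / whole)

print(f"women vaccinated: {pct(women_vaccinated, women_total)}%")  # 89%
print(f"men vaccinated: {pct(men_vaccinated, men_total)}%")        # 65%
```

The same rounding also reproduces the overall uptake figure: 47 of 59 respondents vaccinated gives 80%.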
---
FIGURE 2: Proportion of men and women who received and did not receive a vaccine for COVID-19
---
Results
After analyzing the quantitative and qualitative data collected, three key themes were identified: (1) perceptions about COVID-19 disease and the COVID-19 vaccine in India; (2) safety, efficacy, and availability of COVID-19 vaccines; and (3) public health promotion and education.
---
Perceptions about COVID-19 disease and COVID-19 vaccine in India
As seen in Table 3, more than half of the respondents (54%, n=32) 'strongly' or 'somewhat' agreed that their friends and family were at risk of getting COVID-19, and most believed that they themselves were at risk of contracting COVID-19. While Participant 6, like many others, was unsure about the impact of new variants, he still perceived the government policies to be "good" for fighting COVID-19.
However, not all participants trusted the government or saw COVID-19 as a real threat. Some, like Participant 8, a 45-year-old woman, had a different perspective: "I have not been scared at all. We have to live this way, there is nothing to fear. Its.. it's like the flu, it's like a common cold. It comes and goes away. There is no reason to be scared. My daughter works in a lab, but I'm not worried because we all are very healthy and we eat very good food, so there is no reason to be scared during the pandemic. I think the best way to avoid COVID-19 is by taking homeopathic medicines. I'm drinking warm Haldi milk and chyawanprash every single day. You see here, these antiseptics will cure everything, and they will make your body stronger. Just rely on home remedies and focus on eating well." (Participant #8)
Comparing COVID-19 to a "common cold", Participant 8 normalized the disease, presenting it as insignificant and not particularly scary. This can be attributed to the period in which the interviews were conducted, when India saw the emergence of the highly transmissible but less deadly Omicron variant, which led to fewer hospitalizations despite the rising number of infections. The participant also referred to the use of alternative medicine to prevent the onset of COVID-19, a conviction that left her unafraid of contracting the disease.
When concerns about COVID-19 were minimized, this also affected individuals' decisions to get vaccinated. For instance, Participant 9, a 45-year-old man, discussed the shift towards working from home and having less contact with people as a primary reason for not getting the vaccine. He said:
"I really don't think I need it because I have been working from home since the start of the pandemic. I haven't really been going out, so I cannot get infected. We get our groceries online. We get food delivered. We get clothes online. We don't attend weddings or any functions." (Participant #9)

Close to half of the respondents 'strongly' or 'somewhat' disagreed that the COVID-19 vaccine would protect them from getting and spreading the infection, as seen in Table 5 below. However, a large group, i.e., 80% (n=47), also 'strongly' or 'somewhat' believed that taking the vaccine can decrease the severity of illness and the chances of complications during a COVID-19 infection. Additionally, a majority of the participants also 'strongly' or 'somewhat' agreed that the COVID-19 vaccine would provide their family members with protection against the virus.
---
TABLE 5: Perception of COVID-19 vaccines
In the interviews, some participants expressed their belief that vaccines help to reduce the severity of COVID-19 symptoms, suggesting that with vaccines, "people are having milder infections and are not getting hospitalized as seen in the second wave" (Participant #6).
While the survey results suggested that participants considered themselves and their families at risk of COVID-19 infection, results from the interviews showed complexity in perceptions regarding the disease. Low levels of perceived risk and a decline in trust in the government seemed to be important drivers of lower acceptance of the COVID-19 vaccines. The shift to working from home, especially during lockdowns, also led to dismissing the pandemic as "mild", which, in turn, altered the perceived risk of getting the disease.
---
Safety, efficacy, and availability of COVID-19 vaccines
In the survey, participants were also asked whether they thought that getting the COVID-19 vaccine was the best way to avoid complications post-infection; as seen in Table 6, the majority of the participants agreed with this statement, indicating their trust in the COVID-19 vaccines to minimize hospitalizations and prevent future complications.
---
TABLE 6: Level of agreement about post-COVID-19 infection complications
The interviews, however, showed the nuances in the level of trust in the COVID-19 vaccines. While most participants believed that the vaccines were effective in protecting against severe illness, they also pointed out that the COVID-19 vaccine did not offer complete protection against infection and reinfection with newer variants of the virus. Participants were also concerned about transmitting the virus to family and friends even after getting two shots of the vaccine.
Several participants also raised concerns about COVID-19 vaccines' effectiveness, referring to some of the (mis)information on their ingredients, delivery, and safety, as seen in the quotes below. Participant 7, a 70-year-old man, for instance, questioned the effectiveness of the vaccine and was also concerned about the vaccine changing fertility patterns and causing more harm than good. He said:
"A lot of people say that there are several side effects of the vaccine, some people say that you cannot have a child if you get the vaccine… like you get infertile. Some people say that you will get a heart attack if you take the vaccine. Some people say that there will be blood clots with the vaccine and you could even die so this should be a personal choice and not forced. I don't want it." (Participant #7)
His fear of serious adverse side effects led to his outright rejection of the COVID-19 vaccine. An important event that received a lot of media attention was the death of the well-known Tamil actor Vivekh just two days after taking the COVID-19 vaccine during the second wave of the pandemic. Participant 5, a 50-year-old woman, highlighted this event, noting that "many people were afraid to take the vaccine after this and thought that it had to do with the government vaccines." After Vivekh's death, increased fear of death among the people led to a change in public attitudes regarding vaccines. This was reiterated by Participant 6, who discussed how rumours and misinformation around the Indian COVID-19 vaccines affected people's intent to get vaccinated. He said:
"There are rumours and the word is floating around that covishield and Covaxin vaccines will give you a lot of side effects. And the rumours about the vaccine being not correct or not good or being harmful and stuff like that. So that plays havoc in the minds of people, people who are gullible or people who don't have their own mindset. It really, you know, gives them a lot of trauma, and that they do it for their own political or political or their personal gains. So, that is rampant here." (Participant #6)
Displaying concern about misinformation, he suggested that false information spread for political or personal gains affected vaccine uptake. A similar issue regarding the lack of transparency was raised by Participant 10, a 51-year-old woman, who said:
"Yeah, I don't trust the government. I've heard cases where they are substituting water, substituting the vaccine with water, or even giving saline solution instead of the vaccine (in) some places. I've heard that they are just injecting air. I feel like the government does not give a lot of attention to villages in small towns. It is only focused on big cities, so I don't trust the government very much." (Participant #10)
Calling attention to the vulnerabilities of the rural population, she condemned the government's response to COVID-19 vaccinations, highlighting the issues faced by neglected rural areas when compared to larger, metropolitan cities. This was intensified by the inconsistencies in the government's pricing structure for COVID-19 vaccines. Participant 3, a 54-year-old woman, said:
"Some places they were charging Rs. 500, Rs. 600 and some places they were charging Rs. 1500. Due to this, some people were substituting it with water for money. They were incentivizing people to pay more to get the vaccine faster and can save their families. Now it is free so it is proper. There is no reason for the government to give fake vaccines. Trust has increased. Free vaccination has helped everyone trust the government." (Participant #3)
While most participants believed in vaccines, some felt that they were not effective, suggesting that: "Today's situation if we see, we are not happy about vaccination that even after taking two doses, people are getting infected for the second and third time." (Participant #5), demonstrating uncertainty about the effectiveness of the vaccines. This was confirmed by the survey results, in which the majority of the participants were concerned about the effectiveness and safety of the vaccine, although only 12% (n=7) expressed 'extreme' concern about the ingredients of the vaccine.
To further understand the challenges faced with COVID-19 vaccinations, the survey asked questions regarding individuals' experiences with the CoWIN application to book slots for vaccine shots.
---
TABLE 7: Level of agreement about vaccine drive
Several other concerns were also raised around the COVID-19 vaccines during the surveys. Table 8 displays the responses of the participants on a Likert scale where the 10 statements captured their concerns surrounding COVID-19 vaccines. Overall, the primary barriers that came up were the chances of reinfection, as well as unclear and insufficient information regarding the vaccines. A majority of the participants 'strongly' agreed that they needed the vaccine despite previous COVID-19 infection and did not have religious reasons to question the vaccine. Fear of needles was also not found to be a major reason for concern for most of the participants.
---
TABLE 8: Level of concern regarding the COVID-19 vaccine
As seen in Table 9, an additional concern was related to the lack of COVID-19 information. However, 35.59% (n=21) of the participants reported that they were 'not at all concerned' about vaccine availability. The participants who reported concern about the lack of COVID-19 information relied on social media (WhatsApp, Facebook, Instagram, Twitter, Snapchat, etc.) and their social circle (friends, family, neighbours, etc.) as their most common sources of information about the pandemic. Additionally, only 22.03% (n=13) reported relying on public health websites (World Health Organization, Centers for Disease Control and Prevention, Ministry of Health and Family Welfare, National Health Portal of India, etc.) for their information.

Survey results suggested that the majority of respondents trusted the government to control the COVID-19 pandemic and deliver timely care during the pandemic, and seemed to be supportive of its measures to control the virulent spread. However, participants reported challenges with the CoWIN application for booking vaccine slots and a lack of COVID-19 information. Overall, the comments from the interviews indicated that the participants had several concerns about the government and the health care system. Concerns regarding reinfection and the safety and efficacy of the vaccines were prominent. However, the shift from charging for vaccines to making them free nationwide seemed to have made a positive impression on the participants in this study.
---
Public health promotion and education
When asked about the strategies to further promote the vaccine, interview participants offered several approaches to strengthen the current vaccine drive. Participant 6, with a focus on using media to spread the message, suggested increasing health promotion activities to improve awareness of COVID-19 vaccination. He said:
"I was saying you've got to educate people, you've got to instil confidence in them that this (vaccine) is not a bad thing. This is not going to harm them in any way. You do it through by way of advertisements or by way of word of mouth. You know, educating through mass media, through papers, through newspapers, through electronic media, through the internet or whichever way, but the message has to be reached to the people that this is good. And that maybe somebody would be doing but all this, you know, maybe ask a Bollywood person or Hollywood personalities to post on Instagram? If they get into this mode of promoting the vaccine? It should definitely help." (Participant #6)
Focusing on further publicizing the vaccine's safety and benefits as a public health intervention is especially important for educating those who might be living in rural and remote areas, as was noted by Participant 4, who said:
"Mouth to mouth..mobile.WhatsApp forwards. In villages.. education is important. If there is a gathering of people, you should let everyone know that vaccine is important. Or we can even do door-to-door as we did during polio. But that is very laborious. Door-to-door could be a potential strategy. When they go, they can educate the people at their homes and give the vaccine there itself." (Participant #4)
Keeping in mind the enormous size of the Indian population, Participant 4 suggested that medical teams go door-to-door, especially in rural areas, due to the technological divide, to ensure that everyone has received the COVID-19 vaccine. Moreover, to focus on the use of vernacular language for health education, Participant 3 said:
"Education is the most important. Telling them about the results and showing them statistics will help people understand that they are good. Telling people in the local language is also important so they trust the information more. The stats will show them that people are not dying with the shot and improve immunity." (Participant #3)
Region-specific, transparent messaging in rural areas could help motivate people to get the COVID-19 vaccine, suggesting that if close attention were paid to the needs of rural India, vaccine messaging could be more effective and impactful.
Similarly, with a focus on education and building awareness of COVID-19 vaccines, Participant 9 highlighted the importance of targeting educational campaigns at people living in economically weaker areas, specifically the largest slum in Asia, Dharavi. He stressed the importance of educating this traditionally excluded group to improve health literacy when he said:
"If you look at Dharavi in Mumbai, the largest slum, there is so much superstition and ignorance in those areas. It's such a big slum area and there has to be more awareness, at least for the people who are not educated. Many people think that the vaccine kills people. I do not think it kills people, but I think they are very scared of the side effects. I feel like there's a lot of ignorance I do not think that vaccines will 1000% protect you from the disease but if people do want to take it then they should look at the pros and cons." (Participant #9)
He also brought up an important issue about daily-wage workers and how they are left out of the vaccination programme due to insufficient support available to help them deal with any potential side effects. He said:
"Why it should be mandatory? Also, you have to think about slums and poor people, and daily wage workers. There are so many here in India. You cannot make it mandatory because if they do fall sick after getting the vaccine then the government should pay for their daily wages. They should not be losing out on their day's meal just because they've gotten the vaccine. There has to be some provision for people to take the vaccine and then if they are not well then they should be taken care of." (Participant #9)
Overall, results from the survey and interviews suggested the use of multi-pronged strategies for targeted public health messaging to promote COVID-19 vaccine uptake. Inclusive educational tools, especially for the most vulnerable and marginalized, need to be employed. Participants suggested that targeting these groups is critical to combating misinformation and ignorance regarding the pandemic and to supporting their well-being. Additionally, a specific focus needs to be given to economically weaker areas to instil trust in the COVID-19 vaccines and promote their uptake.
---
Discussion
The goal of this study was to identify the factors that affect COVID-19 vaccine hesitancy, identify barriers that affect vaccination attitudes, and understand how to improve vaccine uptake. Overall, more participants displayed pro-vaccine attitudes than anti-vaccine attitudes. The main reasons for expressing vaccine hesitancy included uncertainty and concerns regarding the effectiveness of the vaccine, fear of side effects, unclear and insufficient information about the vaccines, insufficient financial support post-vaccination for daily wage workers, altered risk perception, and reliance on home remedies for protection against the virus. In contrast to Western countries, the higher level of public trust and confidence in the Modi government in this participant sample appears to have helped India in its fight against the COVID-19 pandemic [24]. While a study conducted in the United Kingdom suggested that people were concerned about the ingredients of the COVID-19 vaccines [25], this study with older adults suggests that such concerns were less important in the Indian context.
The tide of vaccine-related misinformation is a critical barrier that needs to be addressed to increase vaccine uptake in India, especially for middle-aged and older adults [4]. False and misleading claims about the COVID-19 vaccine being dangerous and harmful have cast doubt, preventing people from getting vaccinated [4]. During the interviews, several rumours about vaccines leading to infertility, blood clots, disruption of menstrual cycles, the risk for pregnant women, and the fear of death emerged as some of the key reasons for vaccine refusal and vaccine-hesitant attitudes. Speculations about side effects, combined with perceived lower severity of the virus, were shown to reduce trust in the COVID-19 vaccines. Notably, the primary source of information for the majority of the participants was social media platforms like WhatsApp, Facebook, Instagram, and Twitter. This is important, as the participants in this study were older adults, who are often assumed to have low literacy levels when it comes to digital technology. Studies reveal that people who rely on social media for news are more likely to have misconceptions about the COVID-19 vaccine [26]. This study confirms that finding and shows that older adults in India may likewise be exposed to, and influenced by, misinformation on social media.
Among the interview participants, three out of four people who displayed vaccine-hesitant attitudes lived in rural communities. Concerns about infertility, blood clots, breast cancer, heart attacks and the reliance on Ayurveda and homeopathic treatment were common among this group. There is a rising inequity for COVID-19 vaccinations between urban and rural India with unequal access to vaccines, along with the differences in vaccine procurement and allocation policy between the state and central governments [27]. The "liberalized" vaccine policy has enabled people with access to the Internet to book vaccination slots but excluded those without access from getting vaccinated. This is an important consideration for middle-aged and older adults who may not be technologically literate to access online services. This study shows that in addition to the challenges with physical access to the vaccine, rural residents might also have less access to reliable information about the vaccine, a challenge that can be addressed only with an increase in educational campaigns targeting this specific demographic group.
An important finding of this study was that greater vaccine-hesitant attitudes were found in men when compared to women. This is not consistent with findings in the literature, where women were found to be more vaccine-hesitant than men [28,29]. However, it can be attributed to the findings from other studies reporting that women were more adherent to medical recommendations and more cautious about their health [30]. This positive trend is likely to benefit the Indian population, as women are more commonly the caretakers at home and having pro-vaccine attitudes can potentially benefit their families. However, given that the participants in this study had higher educational qualifications and greater awareness about health when compared to the overall population in India, this finding should be interpreted with caution.
The perceptions about COVID-19 in India have changed since the second wave of the pandemic. In 2021, the Delta variant led to millions of hospitalizations and deaths and left several people without medical treatment [4]. The Omicron variant, which became the predominant strain afterwards, was relatively mild and thus caused fewer hospitalizations and deaths. The interviews for this study were conducted during the third wave of the COVID-19 pandemic, when a large number of adults were already double-vaccinated and some had even received their third shot of the vaccine. Findings from the study show that although there were some individuals who challenged the efficacy of COVID-19 vaccines, the majority still agreed that they were effective enough to reduce complications post-infection.
The participants also provided interesting insights on increasing COVID-19 vaccine uptake. Door-to-door and mass information campaigns were offered as useful strategies to promote vaccines and stop the spread of misinformation. Specifically, the 'HarGharDastak' campaign for door-to-door vaccination needs to become more widespread and, according to the participants, might have the most positive impact in rural areas. It would also help to offer targeted messaging to counter COVID-19 vaccine misinformation, such as fears of disruption of menstrual cycles and breast cancer in women and infertility in men, by relying on the voices of celebrities and famous personalities who evoke a high level of trust in the population. Moreover, there is a need to establish support for daily-wage earners and migrant workers who might be reluctant to get a vaccine because of a temporary inability to work post-vaccination. Holding support camps for these groups is essential for maximum vaccination coverage.
This study has several limitations. First, given the pilot nature of this study and the use of a small, convenience sample, the findings presented here cannot be generalized to the larger population of India. Another limitation embedded in the research design was the timing of data collection. For logistic reasons, the survey was conducted during the time India struggled with the more deadly Delta variant, whereas the interviews were conducted during the spread of the Omicron variant. The timing of the survey and the interviews could have impacted the responses provided by the participants and their attitudes towards vaccinations. Finally, since the survey was offered on the WhatsApp platform, it was not accessible to those individuals who may have had limited access to the internet. Notwithstanding these limitations, this study offered some interesting insights about the views on COVID-19 among middle-aged and older adults in India, which can be utilized to develop larger-scale studies to learn more about vaccine attitudes in this demographic.
Overall, this exploratory pilot study identified key drivers for vaccine acceptance, which included the perceived health benefits of the COVID-19 vaccine, offering it free of charge, maintaining trust in the government and health care system, and facilitating access to reliable information regarding the COVID-19 vaccination. This study also offered some insights on how to address the gap in the knowledge about COVID-19 vaccines among older adults in India, which, hopefully, can impact the decision-making of their families.
---
Conclusions
COVID-19 vaccines are an important public health intervention in the fight against the pandemic. The findings of this study suggest that the majority of older adults who took part in this pilot study are vaccinated against COVID-19 and/or have positive attitudes towards COVID-19 vaccines. However, vaccine hesitancy is still a concern, especially among those residing in rural and remote areas. It is important to note that the contributions from this paper are only applicable to the sample in this study, and more research is required to understand the diverse perceptions of middle-aged and older adults in India. Findings from this sample showed that improved health communication, including non-digitalized information about COVID-19 vaccines, can improve the uptake of vaccines. Focusing on middle-aged and older adults is critical to ensuring that health promotion messages reach families effectively and reduce vaccine-hesitant attitudes among this population.
---
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. University of Waterloo Research Ethics Committee issued approval ORE#43498. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. |
Background: Work ability (WA) is an indication of how well someone's health, skills and experience match current job demands. The aim of this study was to ascertain whether the work ability model can provide a useful explanatory framework to understand some elements of sustainable employability (SE) amongst GPs. Methods: A thematic analysis of 19 in-depth interviews with GPs in the Northern Rivers region of NSW, Australia, was conducted and formed the basis for a qualitative validation of the work ability model. Results: Providing a more comprehensive reflection of the factors and dynamics found to underpin work ability amongst ageing GPs required the creation of specific subcategories within the WA model. Additionally, new themes relevant to general practice also emerged from the data. The analyses revealed a set of important, new factors and relationships that required additions and refinements to the original model, in order to fully explain sustainable employability in this GP sample. These new emerging themes that required model extension were 'Work-life balance and lifestyle', 'Extended social community' and 'Impact of gender'. While the WA model provides a basic explanatory framework for understanding some elements of sustainable employability amongst GPs, a revision of the current model has been proposed to sufficiently describe the factors impinging on sustainable employability in this group. The extended model can potentially be used for addressing workforce planning issues and to assist in programme design to promote sustainable employability amongst GPs, and could potentially be translated to other health professional groups. | Background
Australia has a workforce shortage of general practitioners (GPs), particularly in rural and regional areas [1,2]. This phenomenon is reflected globally, in countries including England, Canada, USA, India, Israel and South Africa [3][4][5][6][7][8]. As GPs are responsible for providing primary care services [9], any deficit in their numbers has a significant potential impact on access to basic medical services and follow-up care [10].
In areas of workforce shortages, GPs are required to work longer hours, often on their own or in a small practice, and at the extremes of their scope of practice [11]. One of the most commonly noted issues for rural and regional doctors is the difficulty in accessing locum support that is timely, affordable or of an adequate quality [11]. Additionally, access to continuing professional development (CPD), including procedural upskilling or specialising, can also be difficult to obtain in areas outside major towns and cities [11]. This is a barrier to sustained interest and challenge for GPs and can, together with issues of unsustainable work demands, lead to burnout and early retirement [12]. A further issue is the ageing of the existing GP workforce. Approximately 30% of the GP population in NSW is aged over 55 years, and approximately 25% Australia-wide [13], with the average age being 53 years [14]. Not only are rural and regional GPs on average older than their urban counterparts, as a population they also retire at a younger age [12].
Increasing workforce supply is inherently a multifactorial challenge, necessarily entailing both increased recruitment and training of new GPs, of which experienced GPs are an integral part, as well as strategies aimed at minimising early retirement from general practice. Keeping experienced GPs in practice is important both to continue providing healthcare to rural communities and to facilitate the training of future generations of doctors. GP recruitment, migration and retention in rural and regional areas are important national matters if shortages are to be addressed [15]. This should necessarily involve the identification of appropriate and effective incentives, as well as strategic efforts directed at addressing barriers and facilitators identified through consultation with GPs.
The work ability (WA) model has historically been used to explain and explore retirement and long-term employability. The WA model was developed in the 1980s at the Finnish Institute of Occupational Health as an instrument to predict retirement age by analysing the interactions of various factors that affect work ability [16]. The model encompasses the resources of the individual, the external factors related to their work, the environment outside of their work and how these factors relate to an individual's workability. The model has been visually depicted as a house, with four interconnected floors, and a surrounding environment to illustrate the interactions of all of these elements (Fig. 1).
The work ability concept blends well with the sustainability movement. People, organisations and governments are increasingly aware that employment goes beyond having a job at one point in time, and that we instead need to think about sustainable employment [17]. Van der Klink and colleagues have defined SE as follows: 'Sustainable employability means that, throughout their working lives, workers can achieve tangible opportunities in the form of a set of capabilities. They also enjoy the necessary conditions that allow them to make a valuable contribution through their work, now and in the future, while safeguarding their health and welfare. This requires, on the one hand, a work context that facilitates this for them and on the other, the attitude and motivation to exploit these opportunities' [18]. While the capability approach is not synonymous with work ability, it is important to note the definition here to place the SE concept into a wider context. While the WA model has historically been used to explain and explore retirement and long-term employability, this may not be the same as sustainable employability. The International Standards Organisation has recently released a guideline on sustainable employability for organisations [19]. In this guideline, SE for the individual is defined as 'the long-term capability to acquire, create and maintain employment, through adaptation to changing employment, economic and personal conditions throughout different life stages'. Work ability can be seen as an element of sustainable employability at one point in time and as a proxy measure for it; in a recent study, 97% of 49 content experts agreed that work ability can be used as a proxy measure for sustainable employability [20].
Based on the above, we propose the following definition of sustainable employability, in order for it to be more measurable in practice: 'Sustainable employability refers to a person's ability to gain or maintain quality work throughout their working lives, while: having the motivation to conduct quality work; maintaining good health and wellbeing; having the opportunity and the right work context; co-creating value on a personal, organisational and community level; and being able to transfer skills, knowledge and competencies to another job, company or other future roles.'
The WA model does not necessarily have a future component in the model itself but it may lend itself to categorising and exploring factors influencing sustainable employability. Demonstrating this in the case of general practice, it can help us explore how GPs can 'recycle' current knowledge, skills and abilities for use in future roles such as teaching or GP advocacy work. It can also help us understand how GPs maintain a work-life balance to sustain a busy demanding career in general practice.
More evidence is needed to demonstrate the effectiveness of SE interventions for ageing workers [21,22]. The purpose of this study was to ascertain whether the WA model can provide a useful explanatory framework to understand sustainable employability amongst GPs.
---
Methods
The current investigation is based on a qualitative analysis of data that was previously collected as part of a larger mixed-method study, described in further detail elsewhere [11]. Participants were recruited via the Northern Rivers General Practice Network (NRGPN), which is a local body representing GPs in the region.
The region is a coastal area comprising the far northeast corner of the state. The region depends economically mainly on tourism, with a large number of small towns and comprises localities classified as Small Regional to Medium Large Regional according to the Modified Monash Index [23]. GPs (N = 165) received a study package from the NRGPN containing a covering letter from the NRGPN, a participant information sheet, consent form, a reply-paid envelope and an anonymous quantitative survey about early retirement, healthy lifestyle, occupational health and work-related factors [10]. All eligible participants received two reminders 2 and 4 weeks after the initial invitation. GPs who returned a completed consent form for participation in the interview component were contacted to arrange a time and a preferred venue for the interview. Consenting GPs who were unable to participate in a face-to-face interview were interviewed by phone (n = 1).
Two interviews were conducted by an occupational health physician, while the remainder of the semi-structured interviews were conducted by Author SP, an experienced academic researcher who was personally known to some of the participants due to her familiarity with the medical professional networks in this geographical area. There were no other people present at the interviews. Interviews lasted approximately 60 min and were mainly undertaken in general practice clinics. A few GPs preferred the interviews to be conducted at a café, at their own home, by phone or at a university location. The semi-structured interview schedule was developed by the authors and pilot tested with two GPs. Specifically, the questions were asked to explore GPs' perceptions of the factors which hinder and encourage healthy workforce participation, reasons for choosing to work or not until and beyond traditional retirement age, and to explore current retirement pathways amongst GPs. The interviewer showed and briefly explained the WA model to the participants at the beginning of the interview. Interviews were audiotaped and transcribed verbatim, and identifying information was removed.
---
Data analysis
NVIVO 10 was utilised to assist with the organisational aspects of the data analysis. A hybrid deductive-inductive thematic analysis approach was used, similar to that described by Fereday and Muir-Cochrane [24]. Initially, two authors independently coded three transcripts and developed a draft coding hierarchy. This was both deductively derived from the WA model [25] and inductively generated based on categories arising from in vivo coding. Discrepancies were discussed and the revised coding structure was applied during a first cycle coding of the full data set. This structure was further discussed, refined and expanded by all the authors throughout the coding process, allowing the development of a final coding scheme which was applied to the data set during a second cycle coding. A thematic map was developed based on the coded data with the WA model as the organising framework. The alignment with this and the identification of aspects which were not explained within this model were discussed amongst the authors and formed the development of a revised model. Thematic narratives were generated.
Fig. 1 Work ability model (Finnish Institute of Occupational Health, 2014) [17]
---
Results
Of the 19 GPs participating, 14 (74%) were male. The average age was 57 years (standard deviation, 12 years). All participants were actively engaged in GP-related employment. Two were in solo practice.
The analyses supported the general applicability of the WA model in understanding SE amongst GPs, in finding that all the elements of the original model were strongly represented in the data. Themes aligned with the model and embodied the fluid and dynamic relationships between the various model components from a general practice perspective. However, in order to provide a more comprehensive reflection on the factors and dynamics found to underpin work ability in this group of GPs, a modification of the WA model was required which entailed the creation of additional subcategories within the model (Appendix). Additionally, new themes relevant to general practice also emerged from the data, which were not reflected in the original model. Hence, the current data set revealed a set of important, new factors and relationships that required additions and refinements to the original model, in order to fully explain sustainable employability in this GP sample. These factors included work-life balance and lifestyle, which both were found to connect to the external and internal environments, the addition of an extended social community to reflect the influence of the wider community within which a GP resides, and the impact of gender.
---
Work-life balance and lifestyle
A desire for work-life balance or to pursue a particular lifestyle was a significant influence on where many GPs chose to reside and work. Intrinsically linked to this theme was family. These themes were dynamically interactive and uniquely functioned as a connection between the external environment and the interior of the WA house and all of its floors. Personality and personal characteristics (first floor) determined the GPs' desire to pursue a particular lifestyle, and the ability to work in a particular manner drove their education and upskilling (second floor). A balance between lifestyle and work meant improved physical and mental health (first floor) and consequently improved their attitude to work and the perception of intrinsic benefits such as personal fulfilment (third floor). The influence of lifestyle and work-life balance also affected workload (fourth floor):
…we've tried to make the job fit the lifestyle rather than the other way 'round. (GP1)
---
Distance and travelling
GPs identified that pursuing the lifestyle or work-life balance they desired had a significant impact on the time they were required to spend travelling to work, education or social events. GPs described living close to schools for families, near the beach, on a farm, or in a different town to maintain anonymity, which often led to increased distance and consequent time spent travelling:
When I stopped working at [a clinic], it was because I was spending two hours driving there. (GP13)
As with the theme of work-life balance, this sub-theme interweaved all of the floors of the WA house, as too much travel was found to negatively impact mental and physical health (first floor), and access to education (second floor) in some cases. Prolonged travel times impacted the workplace due to its influence on time management (fourth floor).
---
Extended social community
The community in which GPs lived had a direct impact on their work. GPs are recognised members of the community. This was found to have either a positive impact on their work experience, with intrinsic rewards (first floor, third floor):
…I contribute to the community here, and that's where I'm sort of trying to make my mark. (GP2)
or a negative impact, as community expectations exerted an external pressure to work in a certain fashion (fourth floor):
If you're in private practice you are basically a slave for your community. (GP14)
Of consideration for many GPs was the lack of anonymity in a small community, the experience of which was mediated by their personality and personal characteristics (first floor):
…I run into patients around and about…that would become a bit of an issue. (GP9)
Finally, the structure of the community itself can influence the type of work a GP is undertaking (fourth floor):
…there's a lot of people with depression in our community and that's a major part of their presentation. (GP8)
---
Gender
Gender was a complex theme that emerged throughout the interviews. It appeared to have a multilayered impact on the individual, evidenced by gender's influence on the various floors of the WA model, and also on the GPs' interaction with their external environment and family. Gender also appeared to exert either an amplification or moderation of the influence of family on the WA model.
Female GPs attracted different presenting problems in their practice with more complex and emotionally loaded problems leading to a higher emotional burden for the GP and longer consultations, which impacted work content (fourth floor) and mental health (first floor):
They might have longer visits. Like [GP name] … she doesn't see the volume of patients that a few of us see. (GP8)
Some female GPs found that the emotional load from these different presenting problems also necessitated more time for recharge and recuperation, in order to maintain what is perceived as a sufficient level of compassion (first and third floors):
But people tell me as a female GP you get all the tears and smears, and it's the tears that really drain you at the end of the day. (GP9)
The very specific work content that female GPs often encountered meant varying their education and upskilling (second floor) to match the needs of their patient demographic (fourth floor and external environment). Commitments to family (operational environment) also impacted their access to and time for continued education:
My general practice isn't really general practice; it's women's health, which started when I first came here…I never really ever got to do general practice once I had children because the population came for 1001 Pap smears and other things pertaining to women's health. (GP13)
In some cases, male GPs avoided a certain kind of work or patient determined entirely by gender:
'Cause there's female doctors in the practice, I do less Pap smears… (GP6)
Female GPs also described using different methods from their male counterparts to manage their practice (and work within a practice), as they defined themselves differently from men. It appeared that women's identity was still largely tied to their role as a mother and wife:
…but as a woman GP, I was not valued-this is a terrible thing to say-I was not valued in my own practice. And the culture in my own practice, which was an interesting practice, was you're playing at general practice. You're a mom; you're playing at general practice. You're not a real GP. (GP13)
Hence, it appeared that women's work ability was more strongly impacted by family than men's. Some women felt that they were perceived as less committed than their male counterparts (third floor) due to their apparent prioritising of family commitments and the consequential reduction in (paid) work (fourth floor).
---
Discussion
The thematic analysis revealed that much of the data could be broadly categorised according to the WA model, suggesting that the work ability model can be used to explain elements of sustainable employability amongst ageing GPs. Our findings aligned with previous research [26,27] which identified that the areas of particular relevance to rural and regional GPs were workforce support, rural and regional training opportunities, access to continuing professional development, flexibility in practice ownership, family support, and recognition and remuneration. However, providing a more comprehensive reflection of the factors and dynamics found to underpin work ability in this group of GPs required the creation of specific subcategories within the WA model (Appendix). Quantitative analyses [28,29] using the work ability index have previously pointed to health and functional capacity (first floor) presenting the highest explanation rate for continued work ability in older workers, followed by work factors (fourth floor). The current thematic analysis supported these findings for GPs.
In addition to the specific subcategories which emerged from the data, entirely new factors were identified: work-life balance and lifestyle, extended social community, and gender. Hence, a modified WA model is proposed to more accurately reflect the components of work ability which form the basis for sustainable employability in GPs (Fig. 2).
Work-life balance and lifestyle provided important and previously unidentified connections between the WA House and the external environment. Where and how a GP chose to live impacted the various elements of their work ability and vice versa. This finding supported previous research which has alluded to the importance of work-life balance for GPs [11,30,31].
The data also revealed that rural GPs experienced the added dimension of an extended social community not previously included in the WA model. The community in which the GP lived had a direct impact on their work. They felt themselves to be recognised (and 'public') members of the community, which for some had a positive effect on their work experience (intrinsic rewards), while others experienced a negative impact, with increased work demands and lack of anonymity. Further research would be beneficial to ascertain whether the significance of the extended social community could be extrapolated to other (health) professional groups.
Gender was the third emergent theme that could be added as an element to the original WA model to explain GPs' sustainable employability. Previous research into the impact of gender on work ability has alluded to women professing greater intolerance to some work challenges earlier than men, especially with regard to physical requirements, but demonstrated greater tolerance for jobs requiring a high cognitive demand compared to men [32]. Women have also been found to be exposed to more unfavourable work conditions than men even when they carry out the same work [32], which corroborates the findings of this study.
Gender impacted every floor of the WA 'House' and had a moderating or amplifying effect on the influence of family. Women GPs were predisposed by their roles as mothers and caregivers to be more likely to modify their work hours to accommodate family commitments compared to their male counterparts. These findings suggest there may be merit in developing gender-specific sustainable employability frameworks.
---
Strengths and limitations
While care should be taken in generalising the current findings to other GP or professional populations, a major strength of this study was the inclusion of a cross section of GPs within a well-defined geographical area of New South Wales. Additionally, the achievement of data saturation and the alignment of the current results with previous research lend support to the validity of the findings. Studies of larger GP populations from a variety of regions/communities would provide further empirical validation of this modified model. This study would also have been strengthened by the inclusion of GPs not currently engaged in practice, as this may have provided a more accurate picture of the factors contributing to early exit. This should ideally be a focus of further research.
---
Impact of research outcomes
This study has revealed the importance of new factors influencing work ability in GPs. It has provided a more comprehensive model for effectively explaining elements of sustainable employability for GPs in the Northern Rivers region of New South Wales. The model can ideally be utilised as a basis for addressing workforce planning issues and to assist in the design of programmes to promote sustainable employability amongst GPs. For example, the WA model can be used as a conversation tool to raise awareness and teach individuals, such as GPs, which factors relate to their own workability and how these factors may influence their own sustainable employability [33].
---
Conclusions
This is the first study that has empirically tested the WA model in a general practice population using qualitative methodologies. In our study, we tested whether the WA model may lend itself to categorising and exploring factors, which influence sustainable employability. The WA model can be utilised to understand elements of sustainable employability amongst GPs. However, work-life balance and lifestyle, extended social community and gender were aspects of work ability pertinent to GPs which did not form part of the original model and required inclusion to more accurately reflect the components contributing to sustainable employability amongst GPs.
---
Appendix
Themes identified in the current analysis which align with the original WA model.
---
WA model first floor-health and functional capacities
Physical and mental health: while many GPs stated that their work was not physically taxing, physical health and age were identified as significant factors in the hours they worked (fourth floor) and their continuing work ability.
I really don't view retirement as being...a finite point for me, in terms of age, because this is not a physical job… my brain still works. (GP2)
The lack of physical demands highlighted the importance of mental health, of the ability to manage stress, and the intellectual and emotional challenges of the work they encountered. This is closely interlinked with personality and personal characteristics, and with values and attitudes to work (third floor). I think the first obvious one would be my health, which at the moment is good and fine, but of course if that changed dramatically I could have to leave at short notice... (GP11) Working as a GP was recognised as having a direct effect on health, which in turn impacted the ability of the individual to work. GPs identified the positive aspects, like health promotion, which increased their work ability, but also the negative impacts of work-related stress.
Alcohol intake is limited because you're on call, so that's been good-I'd probably have a few drinks every night otherwise, but I only have a drink a couple of days a week now. (GP1) Work and health had a fluctuant and reciprocal relationship which uniquely connected the first floor with the fourth floor in the WA model.
…I know a couple of people who got ill who rightly or wrongly felt that the stress of their job and not looking after themselves well enough [because of the workload] was a key component of them getting unwell… (GP11) Age and physical and mental health are connected to upskilling and continuing education (second floor). An older or overly stressed GP was less likely to have the desire or capacity to pursue further education and enhancement of their skills.
Personality and personal characteristics: the resilience and personality of a GP directly impacted their work ability. It formed the basis of their desire and ability to work as a GP, and interacted with their mental health (first floor). Personality directed motivating factors and intrinsic benefits (third floor) and affected the workplace (fourth floor), shaping interactions with colleagues and patients and attitude to workload.
I think it has a lot to do with personality; I think it's a question of resilience or the lack thereof; so one has to be resilient. I think that personality-wise, doctors generally tend to be fairly obsessional people; they like to get everything just right, and when things don't go just right, well they get nervous and anxious, and in severe cases, get depressed. (GP4)
---
WA model second floor-competence
Upskilling and continuing education: the ability to pursue and practice in areas of special interest was identified as an intrinsic benefit (third floor), and having the opportunity to learn new skills impacts the workplace (fourth floor) where the GP is able to use these skills for continued interest and challenge. A GP's physical and mental health (first floor) influenced their desire and ability to undertake further education and upskilling. Access to and opportunity for continued education and upskilling is therefore an important factor in sustainable employability.
…my other interest is Dermatology… so that I can actually get my skills up and perhaps doing a lot of sun cancer medicine type thing (sic). (GP1) Accessibility to training and education was found to be lacking in some rural centres:
The educational opportunities aren't always great lo-cally… (GP8)
---
WA model third floor-values, attitudes and motivation
Values and attitudes: the individual's attitude toward work and their personal values had a considerable impact on their work ability and linked closely to personality and personal characteristics (first floor). Some older GPs identified a work ethic or attitude that they felt was a product of their era that kept them working. I always intend to work till I die...So I'll be working forever. (GP1) This work ethic kept GPs engaged in their careers and accessing education and upskilling (second floor) to maintain their competency and work ability.
…people don't have to work as hard as we did then… (GP15)
The younger generation of GPs represented a shift in thinking toward a more balanced view of work and lifestyle that was reflected in their work habits, with many working part-time (fourth floor) and putting more emphasis on home and family life (external environment) than their older counterparts. This change in attitude also reflected a wider change in societal values indicating the influence of the external environment on this floor of the WA model. …the young GPs coming through have no intention of ever working full-time. I think that's extremely sensible. (GP7) Intrinsic job benefits: potent motivating factors were the intrinsic rewards that GPs received from working. These included the daily challenge of diagnosing and managing patients (fourth floor) and the opportunity to explore special interests (second floor). I think it [acute hospital work] is a great extension to general practice work. I find that for me you know, it's challenging…I'm not just looking at my general practice day; I'm also looking at my acute hospital days. So it is something, there is a motivational force in a way. (GP2) Furthermore, there was the positive reinforcement of helping a person to change their health for the better, as one GP said: …that's the feedback that you need to keep going; you've got to feel good about what you do… (GP4) This sense of fulfilment and reward aligns with personality and personal characteristics and has a positive connection and impact on mental health (first floor).
Extrinsic job benefits: monetary reward and the consequent ability to live a desired lifestyle or satisfy financial requirements was a commonly identified incentive to continue working as a GP. This connects the external environment factors of family, via the need to provide for them financially, and the work-life balance and lifestyle theme which connects the interior to the exterior of the house of the WA model. Monetary reward also tied in closely with recognition and status (fourth floor), and to education and upskilling (second floor) to enable a GP to pursue more lucrative special interests.
We all want to be paid better, there's no doubt that general practice is the most poorly paid medical specialty… (GP7) One GP spoke in plain terms that financial reward was not just about income for them, highlighting the connection with personality and personal characteristics (first floor): …people need to feel valued…you don't get paid as much…you know, it ranks less. (GP17) Flexibility: within the theme of extrinsic job benefits is flexibility, both in the workplace and in the work the GP is able to do. …flexibility I think is really important to keep people in the workforce. (GP6) Being able to modify work hours or areas of interest was identified as an important consideration in continuing to work in a particular practice or community. This overlaps with many other themes of the WA model. The need to be flexible reflects a personal characteristic, and the satisfaction of this need would positively impact mental health, and potentially physical health if flexibility allowed the GP to engage in more active pursuits (first floor). Flexibility allows for the pursuit of further education (second floor). Finally, flexibility has the most influence on workload, work content and even on work relationships as it indicated the ability to compromise (fourth floor).
Just by flexibility of the workplace, so that if the only time you can get to the gym is between 11 and 12 o'clock, then as long as you're doing your hours, go and do that then. (GP11)
---
WA model fourth floor-work
Workload: The most recurrent and therefore strongest theme identified related to workload. This theme encompassed the hours worked, the additional pressures and stressors of running a practice or the reduction of same through working as a contractor. Excessive workload was frequently identified as a barrier to longevity of working as a GP, while alternatively the ability to be flexible with the hours worked, or to have support in the form of locums and practice nurses, was described as essential to continued working. So I'm working full-time 4 days a week, but also working maybe 2 weekends a month…I also work as a VMO at the local hospital-I see patients every day there. (GP2) Work relationships: many GPs indicated the importance of their interactions with their work colleagues as: an incentive to work (third floor); a way to alleviate work pressures and increased social connectedness (first floor); a way to perform better in the workplace due to added support.
The other GP that I work with [GP name], we get on really well. We've got quite different styles but they work together pretty well. And the rest of the staff, we all get on very well. (GP1) Work relationships therefore appear to have a direct influence on workload, which in turn, as one GP identified, was an important intrinsic benefit (third floor):
I think the number one thing for keeping doctors here is having enough other people to help with the workload. I think that's the most important thing, it's more important than money, it's more important than the actual work environment. (GP1) Work content: having the opportunity to utilise existing skills or to focus on an area of special interest within the scope of their work was an important factor for many GPs in their continuing interest and sense of positive challenge within the workplace. This reflects personality and personal characteristics (first floor) and competency (second floor) as the GP needs to be educated in these skills to be able to practise them. Utilising existing skills or knowledge was also a way for GPs to transition into retirement without having to finish working entirely, for example, as a teacher or mentor for junior doctors and registrars. Finally, there was a sense of personal reward and of recognition of their skills or accumulated knowledge (third floor).
…in an ideal world I'd like to be doing public procedural work (GP3)
Status and recognition: another aspect of continued work ability was identified as the need for recognition of the importance of the GP role within the healthcare system. This was deemed to be especially desired from work colleagues and the wider community, including governing bodies responsible for legislation; …people would say to me, doctors would say 'You're not a real doctor anymore, because you're not doing emergency, hospital anymore' so that kind of attitude is not helpful… (GP6) This theme connects with values and attitudes (third floor) and with personality and mental health (first floor). Recognition and status also reflect the many years of study required to become a competent GP (second floor).
…I also think that GPs are under-rewarded or underappreciated from our own medical community. (GP9) Physical environment: the environment in which the GP practised had an impact on the way they worked and even in their continued desire to work. The size of the practice and access to equipment that facilitated their work or areas of interest was described as an important factor of work ability. …Dermatoscope, otoscopes, you know that they are accredited practices with suitable medical equipment…I would struggle to work for a practice that didn't have that… (GP10) Patient factors: interactions with patients had a significant effect on GPs and their work ability and even on the type of medicine they practised. The additional external environment theme of extended social community connects with patient factors, as particular communities possess a predominance of a certain type of patient (e.g. retirees, farmers, miners) which influences the type of medicine a GP practises. Also patient expectations were identified as a strong influence on how a GP worked, affecting hours worked, on-call availability, and hospital duties.
Yes, once you've got your patient base-their expec-tations…an example of that is a really popular local doctor here, and he really wanted to do the part-time thing, but he had thousands of really dependent patients, and you know, and his personal ethic is always to work really hard, and then when he didn't want to work hard there was this sort of [backlash] (GP2) Patient factors also overlap with intrinsic benefits (third floor), as one GP said:
...I have good relationships with the patients and I get a lot of enjoyment out of that. (GP3)
Positive patient interactions also impact mental health (first floor). Finally, GPs pursued further education and upskilling to meet the needs of their patient base (second floor).
Human resources: closely associated with workload and workplace interactions was the availability of human resources to support GPs in the workplace. This included locum support to enable GPs to take leave or to distribute their workload:
We've had a lot of registrars come through in the past; none of them really wanted to stay… (GP1)
The inclusion of practice nurses and allied health practitioners helped coordinate care and further share some of the burden of patient management. I've worked in just about every capacity I can think of, from solo up, but since this concept of team care, and having practice nurses, and practice managers has started, we're no longer a harassed individual in a single room, trying to solve the world's problems on its (sic) own. (GP7)
---
WA model-operational environment
Family: the GP's family was consistently identified as one of the most powerful factors in work ability. Family affected almost every level in the WA model house. Family pressures or rewards had a direct impact on stress and mental health, as well as physical health, for example through physical activity with the children (first floor).
Family constraints influenced the ability for a GP to have time and access to further education and training (second floor), or alternatively influenced the type of training as the GP structured their work around their family. Financial security for family was an extrinsic benefit and motivating factor (third floor). Finally, family influenced hours worked and even location of the workplace (fourth floor). ...My youngest is 15 and will leave or finish school in 3 years. And at that stage we plan to leave [inland village], not because of the job but because of those things. (GP1) Social community: encompasses relatives, friends, and acquaintances. GPs identified the overlap between the work they undertook and the relationships they formed within their community. It correlates with intrinsic benefits and motivating factors (third floor) and physical and mental health (first floor).
I have a medical role with those, with the <sporting team>, and you know, I've got good friends in this area. (GP8)
---
WA model-society
Politics: the 'red-tape' and paperwork associated with general practice was identified as a significant negative factor in work ability, impacting motivation (third floor), and workload (fourth floor).
…I think it's a pain, and I would think it would put people off being a general practitioner. We're just like scribes. (GP15) …every time the government gets into, does something, it makes the administrative work more. They say that they cut the red tape down, and they've made it longer. (GP5) Government policies and bureaucracy were often seen as counterproductive and even antagonistic: I think the government…when it suits them they love GPs and when it doesn't suit them they attack them. (GP11) There were reciprocal effects on the mental health, and personality and personal characteristics (first floor) played a part in how strongly GPs felt impacted by politics and whether they took a political role for themselves.
Operational factors: since the global financial crisis (GFC) some GPs found that the external impact on their finances (third floor) had affected their workload (fourth floor). …I've spoken to the GPs who've said they need to stay in [the workforce] longer to top up their super... (GP4) Financial independence (third floor) was identified as a reason a GP might retire early from work: …they retired at 60 because they had that nice, juicy index pension, so they're comfortably off. (GP12) GPs also spoke of the lack of flexibility (third floor) in reducing their hours, or changing to a teaching role (fourth floor), as a deterrent to continuing working. ...just your medical indemnity insurance, that if you're working less hours it needs to step down…They're [governing bodies] addressing it… (GP4) Support: GPs reported both positive and negative experiences with regard to support from external bodies, for example the Royal Australian College of General Practitioners (RACGP). Support was beneficial in the areas of providing locums, staff, practice infrastructure, and advocating for GPs politically (fourth floor). GPs indicated that support from advocating bodies had one of the lowest levels of influence on their work ability.
---
Availability of data and materials
The data are not publicly available due to them containing information that could compromise research participant privacy/consent.
---
Abbreviations CPD: Continuing professional development; GP: General practitioner; NRGPN: Northern Rivers General Practice Network; SE: Sustainable employability; USA: United States of America; WA: Work ability Authors' contributions Study concept and design was led by SWP, with contribution by VH. Data was collected by SWP. The qualitative analyses was conducted by JS, with significant contributions by VH and SWP. All authors contributed to the writing, read and approved the final manuscript.
---
Ethics approval and consent to participate
Ethics approval was received from University of Sydney Human Research Ethics Committee (#14112) and The University of Wollongong Human Research Ethics Committee (#GSM15/004).
---
Competing interests
The authors declare that they have no competing interests.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
Background: Socioeconomic inequalities in mortality pose a serious impediment to enhance public health even in highly developed welfare states. This study aimed to improve the understanding of socioeconomic disparities in all-cause mortality by using a comprehensive approach including a range of behavioural, psychological, material and social determinants in the analysis. Methods: Data from The North Denmark Region Health Survey 2007 among residents in Northern Jutland, Denmark, were linked with data from nationwide administrative registries to obtain information on death in a 5.8-year follow-up period (1 st February 2007-31 st December 2012). Socioeconomic position was assessed using educational status as a proxy. The study population was assigned to one of five groups according to highest achieved educational level. The sample size was 8,837 after participants with missing values or aged below 30 years were excluded. Cox regression models were used to assess the risk of death from all causes according to educational level, with a step-wise inclusion of explanatory covariates. Results: Participants' mean age at baseline was 54.1 years (SD 12.6); 3,999 were men (45.3%). In the follow-up period, 395 died (4.5%). With adjustment for age and gender, the risk of all-cause mortality was significantly higher in the two least-educated levels (HR = 1.5, 95%, CI = 1.2-1.8 and HR = 3.7, 95% CI = 2.4-5.9, respectively) compared to the middle educational level. After adjustment for the effect of subjective and objective health, similar results were obtained (HR = 1.4, 95% CI = 1.1-1.7 and HR = 3.5, 95% CI = 2.0-6.3, respectively). Further adjustment for the effect of behavioural, psychological, material and social determinants also failed to eliminate inequalities found among groups, the risk remaining significantly higher for the least educated levels (HR = 1.4, 95% CI = 1.1-1.9 and HR = 4.0, 95% CI = 2.3-6.8, respectively). 
In comparison with the middle level, the two highest educated levels remained statistically insignificant throughout the entire analysis.Socioeconomic inequality influenced mortality substantially even when adjusted for a range of determinants that might explain the association. Further studies are needed to understand this important relationship. | Background
Socioeconomic inequalities in mortality have been observed in several high-income countries [1][2][3][4][5][6][7]. This is revealed not only when comparing the most advantaged and the most disadvantaged social groups-a gradient can be observed across the entire socioeconomic hierarchy [1][2][3]6,8]. In Denmark, with its relatively low economic inequality, a high level of income protection and universally tax-financed healthcare, the past twenty years have seen increasing inequality in mortality [1]. This poses a serious challenge to public health [1][2][3]8], as reflected by the priority given by World Health Organization (WHO) to the social determinants of health in its draft for the 12th general work programme for 2014 -2019 [9]. Providing for equality in health is a moral obligation, as both Mackenbach and Marmot have emphasized [8,10]. Despite a broad recognition of the importance of this subject the reasons for these disparities are still unknown [1][2][3]8,11,12]. It is crucial to obtain a comprehensive understanding of their underlying causes, as this is vital to prevent the persistence of the disparities [2,11,13]. Sociological theory explains health disparities by social stratification comprised of three components. Firstly, mobility mechanisms that place individuals into social strata causing differences in the personal characteristics of individuals between strata. Secondly, allocation rules causing differences in distribution of resources to social strata resulting in inequalities between social strata in access to material and immaterial resources. Thirdly, social processes that render some resources of greater value than others, i.e. resources that can be used to avoid health problems [8]. Additional theories can be related to the social stratification perspective. 
The theory of "fundamental causes" suggests that social forces underlying the social stratification induce health disparities as opposed to the proximal risk factors such as smoking, drinking and eating habits. Distal resources such as knowledge, money, power, prestige and beneficial social connections, that can be applied to enhance health, are distributed differently among social strata [8,14]. Health disparities may also arise from health-related selection during social mobility i.e. individuals are sorted into social classes based on health or psychosocial determinants as stipulated by the "social selection" theory [8,15]. The "Neo-materialist" theory propose that disparities in material recourses remain in welfare stats despite of relatively small income inequalities, and what remains is still substantial for health disparities, partly because material disadvantage is associated with lifestyle diseases resulting from poor health-related behaviours, such as lack of physical exercise and unhealthy diet etc [8,15]. Unequal distribution of psychosocial determinants such as psychosocial stress, lack of sense of control and social support may also be of importance in the explanation of health inequalities, as suggested by the "Psychosocial" theory [8,15]. Moreover the theory of "Diffusion of innovations" emphasizes that health disparities result from faster adaption of new healthy behaviours and earlier pick up of interventions among individuals with a higher socioeconomic status [8]. None of these theories are mutually exclusive, and they may be apparent simultaneously and reinforce each other [8,15]. Researchers have thus proposed various theories on the persistence of health inequalities in welfare states [8], from which potential pathways underlying the inequalities have been developed including behavioural, psychological, material and social mechanisms (Table 1) [11][12][13]16,17].
These mechanisms may, both independently and in combination, by reinforcing each other, influence the socioeconomic gradient in mortality [11,13,17]. It is crucial to focus on the identification of determinants that may explain the socioeconomic inequalities in mortality [1][2][3]8,11,13]. Studies have investigated the impact of behavioural determinants on the association between socioeconomic position and mortality. These found, that the association was substantially accounted for by adjustment for health-related behavioural determinants [16,[18][19][20][21][22][23]. In addition, only few studies have combined the study of behaviours in combination with study of
---
Table 1 Potential mechanisms underlying socioeconomic inequalities in all-cause mortality
---
Behavioural mechanisms Psychological mechanisms
Differences in socioeconomic strata in terms of health-related behaviours and lifestyles, including smoking habits, alcohol consumption, exercise and dietary patterns as well as morbid obesity [8,15].
Disparities in personality profile and psychological resources, such as cognitive ability, knowledge, cooping abilities, attitude, a sense of control and perceived social standing. The personality profile is believed to be a determining factor for the socioeconomic position, as educational and occupational achievements are dependent on personal talent and effort [8]. Furthermore, psychological stress is hypothesized to increase the risk of premature mortality by producing disruptions in the neuroendocrine system [8,15].
---
Material mechanisms Social mechanisms
Unequal distribution of material resources such as income, but also what income enables i.e., being able to afford healthy food, access to goods and services, favourable living and housing conditions, employment status, service provision such as schools and transport and welfare to population health [8,15].
Stratified difference in social resources such as social relationships, social support, interpersonal trust, norms of reciprocity and mutual aid, power and prestige [8,15].
material or psychosocial determinants [11][12][13]17,24,25], possibly because data on the composition of social strata or on the distribution of immaterial determinants among social groups can be difficult to obtain [8]. This study linked data from The North Denmark Region Health Survey 2007 with individual-level data obtained from nationwide administrative registers. The self-administrated health survey obtained information on demographic characteristics, lifestyle factors, disease, quality of life, work characteristics, social support etc. [26].
---
Aim
The aim of our study was to explore whether behavioural, material, psychological and social determinants could explain the association between educational status and all-cause mortality. This was done by investigating the separate and mutual effect for each group of determinants.
---
Methods
---
Design and population
A register-based cohort study of inhabitants in the Danish Region of Northern Jutland was conducted with a follow-up period from the 1st February 2007 to 31st December 2012. The participants had previously answered a postal questionnaire, sent to a sample of 23,491 citizens in Northern Jutland, Denmark, aged 16-80 years drawn randomly from a population of 438,759 inhabitants in the Civil Registration System [26,27]. The sample was stratified by the region's 11 municipalities. Two reminders were sent to citizens who had not returned the health survey [26]. For the current study only participants aged ≥30 years were included, as final educational status was considered to be acquired at this age. This excluded 1,266 participants aged 16-29 years, leaving a total of 10,231 participants. The response rate was 51.79% (49.90% men) among subjects aged ≥30 years. Information on educational status was missing for 125 (1.22%) participants, reducing the study population to 10,106 subjects. Only participants with no missing on all of the independent variables were included in the final sample, resulting in 8,837 subjects.
---
Socioeconomic status
A conceptual challenge exists in defining socioeconomic position on an individual level. Often used measurements include educational status, income or occupational class [25,28]. It has been demonstrated that these factors cannot be used interchangeably as they are related to different causal processes [28]. In this study, educational status served as a proxy for the participants' socioeconomic position, as this is a fundamental determinant of both occupation and income [28]. Information on individuals' highest completed course of education was obtained from the Population's Education Register.
The register only provides information on education authorised by the Danish Ministry of Education and of a duration of more than 80 hours [29]. Based on the International Standard Classification of Education (ISCED 2011) [30], we grouped participants according to their highest completed education, but deviated from the ISCED classification for the fourth level, i.e., postsecondary non-tertiary education, as no such programmes exist in Denmark [31]. Instead, programmes at ISCED level 3 were split into two, resulting in five groups (A-E):
A. Early childhood education, primary education and lower secondary education (ISCED levels 0-2) B. General upper secondary education, high school programmes (ISCED level 3) C. Vocational upper secondary education, vocational training and education (ISCED level 3) D. Short or medium-length higher education, first-cycle programmes tertiary education, bachelor or equivalent (ISCED level 5-6) E. Long length higher education, second-cycle programmes, Master's or equivalent, or Third-cycle programmes Doctoral, PhD programmes or equivalent (ISCED levels 7-8)
---
All-cause mortality
Information on all-cause mortality was obtained from the Civil Registration System [27]. Time to death was measured by days from the time of receiving questionnaire the 1st of February 2007 until death, emigration or end of follow-up, right censoring on December 31st 2012, resulting in a 5.8 year follow-up period.
---
Demographic information and health status
Demographic information on age and gender was gathered from Civil Registration System [27]. Based on information from the National Patient Registry co-morbidity was measured using the Charlson index [32]. The register holds information on all admissions and outpatients visits to Danish hospitals and specialty clinics. All admissions and visits were registered by a primary diagnosis and, if appropriate, one or more secondary diagnoses, according to the International Classification of Diseases, 10th Revision [33]. Objective health was assessed based on Charlson co-morbidity scores, a weighted index that takes into account the number and the seriousness of co-morbid diseases. Each condition was assigned a score of 1, 2, 3, or 6, depending on the associated risk of dying [32]. The objective health variable was formed based on the Charlson co-morbidity scores 0, 1, 2 and ≥3 (Additional file 1: Table S2a). Information on subjective self-reported health was obtained from the health survey by the global question "In general how do you assess your current health?" with five response options ranging from Very good to Very poor, and a Don't know response [26] The subjective health variable was formed on the basis of the response options (Additional file 1: Table S2a).
Health-related behavioural, material, psychological and social determinants
Our study included a range of determinants in the explanation of the association between educational status and all-cause mortality. Based on the underlying mechanisms, the explanatory variables were divided into four groups: behavioural, material, psychological and social determinants, as shown in Table 1. Information on explanatory determinants was obtained from the health survey [26] and the Income Statistics Register [34].
The exact wording of the questions and the matching response options used in the self-reported questionnaire are shown in the (Additional file 1: Tables S2b-e).
---
Behavioural determinants
The behavioural determinants included smoking patterns, alcohol intake, Body Mass Index (BMI), and dietary and exercise habits. Behavioural variables were formed on the basis of the response options (Additional file 1: Table S2b), with the exception of alcohol intake and BMI. Alcohol intake was estimated according to the Danish Health and Medicines Authority recommendations on risk behaviours [35], which are based on the respondents' weekly consumption of units (Additional file 1: Table S2b). Participants were categorized into three groups based on consumption and gender, i.e. (women/men) low-risk alcohol intake (<7/<14 units per week), moderate-risk alcohol intake (7-14/14-21 units per week), and high-risk alcohol intake (>14/>21 units per week) [35]. The participant's BMI was calculated from information on weight and height (Additional file 1: Table S2b). A standard classification of BMI was used: <18.5 (underweight), 18.5-24.9 (normal weight), 25-29.9 (overweight), 30-35 (obese I) and >35 (obese II) [36].
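The two derived variables above can be sketched in Python (a minimal illustration of the cut-offs quoted in the text, not the study's actual code):

```python
def bmi_category(weight_kg, height_m):
    """BMI classes as used in the study [36]."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal weight"
    if bmi < 30:
        return "overweight"
    if bmi <= 35:
        return "obese I"
    return "obese II"

def alcohol_risk(units_per_week, is_male):
    """Risk groups from weekly units, using the gender-specific cut-offs
    described above (women/men: <7/<14, 7-14/14-21, >14/>21)."""
    low, high = (14, 21) if is_male else (7, 14)
    if units_per_week < low:
        return "low-risk"
    if units_per_week <= high:
        return "moderate-risk"
    return "high-risk"
```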
---
Psychological determinants
The psychological determinants included feeling stress, anxiety, nervousness, restlessness, hopelessness, unhappiness, feeling depressed and having too many worries. Psychological variables were formed on the basis of the response options (Additional file 1: Table S2c).
---
Material determinants
The material determinants included profession, income, residential area, type of residence, residential ownership, difficulty paying bills and use of neighbourhood facilities.
With the exception of information on income, the material variables were formed on the basis of the response options (Additional file 1: Table S2d). Income information was obtained from the Income Statistics Register, which includes information on all tax return forms, thus covering all economically active citizens [34]. To obtain stable measures for household incomes and individual incomes, an average of income in three successive years (2004, 2005 and 2006) was calculated. Three groups were formed for both household and individual incomes: low, average and high income.
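The income measure can be sketched like this (a minimal Python illustration; the article does not state the exact cut-points between the low, average and high groups, so equal-sized tertiles are assumed here):

```python
def three_year_average(incomes_2004_2006):
    """Average of the 2004-2006 incomes, used to stabilise the measure."""
    return sum(incomes_2004_2006) / len(incomes_2004_2006)

def income_tertiles(avg_incomes):
    """Assign each averaged income to a low / average / high group.
    Equal-sized tertiles are an assumption for illustration only."""
    ranked = sorted(avg_incomes)
    n = len(ranked)
    lo_cut, hi_cut = ranked[n // 3], ranked[(2 * n) // 3]

    def group(x):
        if x < lo_cut:
            return "low"
        if x < hi_cut:
            return "average"
        return "high"

    return [group(x) for x in avg_incomes]
```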
---
Social determinants
The social determinants included time spent with family or friends, being able to count on others for help, loneliness, trust and reciprocity, marital status, use of cultural facilities, social involvement in the local community and association activities. Social variables were formed on the basis of the response options (Additional file 1: Table S2e).
---
Ethics
The study was approved by the Danish Data Protection Agency (Ref.GEH-2014-014). All data were linked and stored in computers held by Statistics Denmark and made available with de-identified personal information to ensure that individuals could not be identified. In accordance with the Act on Processing of Personal Data only aggregated statistical analyses and results are published [37,38]. Retrospective anonymized register-based studies do not require obtained written informed consent and ethical approval [37,38].
---
Statistical analyses
For descriptive statistics, continuous variables were compared with Analysis of Variance (ANOVA) tests and discrete variables with Chi-square (χ²) tests to test for differences between groups. Comparison of survival was performed with proportional hazards Cox regression models, with time-on-study as the timescale. Hazard ratios (HR) and the corresponding 95% confidence intervals (95% CI) were determined. Educational level C, vocational upper secondary education, was chosen as the reference group on the basis of size, as this group was the largest of the five educational levels. Analyses were performed in three preselected steps: initially, a model adjusted for age and gender was fitted (Model 1), followed by further adjustment for objective and subjective health (Model 2). A third model allowed additional adjustment for selected behavioural, psychological, material and social determinants (Model 3), with a step-wise inclusion of variables. Subjects were censored at the end of the follow-up period (31st December 2012). Analyses were conducted applying a design weight to correct for sample selection bias, as respondents in the different municipalities did not have equal chances of receiving the questionnaire. The proportional hazards assumption and the linearity assumption of the proportional hazards Cox regression model were tested and found to be valid; Schoenfeld's residuals were used to test the proportional hazards assumption. We examined possible interactions between gender and educational status, and between age and educational status; no statistically significant interactions were detected. The level of statistical significance was set at a p-value <0.05 for all statistical analyses.
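The three preselected adjustment steps can be summarised as nested covariate sets, sketched below in Python (the analyses themselves were run in SAS and R; the variable names here are illustrative placeholders, not the dataset's actual column names):

```python
# Nested covariate sets for the three preselected Cox regression models.
MODEL_1 = ["age", "gender"]
MODEL_2 = MODEL_1 + ["objective_health", "subjective_health"]
MODEL_3 = MODEL_2 + ["behavioural", "psychological", "material", "social"]

MODELS = {1: MODEL_1, 2: MODEL_2, 3: MODEL_3}

def covariates(model_no):
    """Return the adjustment set for a given preselected model."""
    return MODELS[model_no]
```

Each later model strictly extends the previous one, which is what lets the authors attribute changes in the hazard ratios to the newly added group of determinants.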
To detect whether excluding subjects with missing values on any of the independent variables would bias the results, we performed a sensitivity analysis conducting the multivariable analyses on the full sample (n = 10,106), i.e. using all available data in the different models (Additional file 2: Figure S3a, Additional file 3: Figure S3b). All data management was performed using SAS software, version 9.4 (SAS Institute Inc., Cary, North Carolina, USA) and all analyses were executed using R Studio software, version 0.97.551 (R Studio, Inc. ©2009-2012, part of the R statistical software package, version 3.0.2, Development Core Team).
---
Results
---
Participants' characteristics
Subjects' mean age was 54.1 years (SD 12.6); 45.3% were men (n = 3,999). Additional file 1: Table S2 presents, in condensed form, the baseline characteristics of the study population by educational status. The full distribution of demographic and all explanatory variables according to educational level can be found in Additional file 1: Tables S2a-e.
---
Educational level A
Participants whose highest education was primary school were characterized at baseline by a high average age (59.6 years (±12.0)), a high proportion of deaths (7.4% (n = 207)), co-morbidity (3.8% (n = 107)), poor self-rated health (43.2% (n = 1,213)), obesity (15.3% (n = 429)) and smoking (28.9% (n = 813)). Many had low income (51.1% (n = 1,434)), were tenants (17.4% (n = 489)) or flat-dwellers (9.4% (n = 265)), and were pensioners or on early retirement (49.9% (n = 1,507)). Use of a community house or centre (9.7% (n = 272)) and clubs for older people (14.3% (n = 401)) was also prevalent in this group.
---
Educational level B
Respondents with general upper secondary education were characterized by a lower average age (45.9 (±10.4)), a high prevalence of co-morbidity (2.1% (n = 5)), stress (64.5% (n = 151)) and difficulties with paying bills (4.3% (n = 10)).
---
Educational level C and D
Respondents with vocational upper secondary education and short-to-medium higher education, respectively, had baseline characteristics that were unremarkable compared with the other educational levels.
---
Educational level E
Among respondents with a long higher education, high incomes were prevalent (77.1% (n = 273)), as was the use of neighbourhood facilities such as parks (49.4% (n = 175)) and cinemas and theatres (17.8% (n = 63)). This group participated in association activities (46.3% (n = 164)) and spent less time with family (44.6% (n = 158)). Many were non-smokers (84.5% (n = 299)); alcohol consumption was high (20.3% (n = 72)) and self-rated health was good (81.4% (n = 288)).
---
Unadjusted and adjusted risk of all-cause mortality
In the 5.8-year follow-up period, 395 (4.5%) deaths occurred. All-cause mortality was unevenly distributed across educational levels; significantly more deaths occurred in the least educated groups (p < 0.001). Using multivariable Cox regression models with adjustment for confounding by age and gender, we found that the risk of mortality was significantly higher among respondents on levels A and B (Figure 1, Model 1), (HR = 1.49, 95% CI = 1.20-1.84 and HR = 3.71, 95% CI = 2.35-5.87, respectively). The midmost level, C, was chosen as the reference group. In comparison with level C, no statistically significant difference was observed between the highest educational levels, D and E, (HR = 1.23, 95% CI = 0.79-1.92 and HR = 1.10, 95% CI = 0.59-2.04, respectively).
Figure 1 Hazard ratios and 95% confidence intervals for educational status calculated by Cox regression models on complete cases.
Further adjustment for the effect of objective and subjective health (Figure 1, Model 2) resulted in comparable patterns for levels D and E (HR = 1.26, 95% CI = 0.83-1.91 and HR = 1.10, 95% CI = 0.56-2.17, respectively). The higher risk of mortality also remained statistically significant for the respondents with the shortest schooling, levels A and B (HR = 1.35, 95% CI = 1.08-1.68, and HR = 3.52, 95% CI = 1.97-6.29, respectively). The inequality among groups did not disappear when adjustment was made for the effect of behavioural, psychological, material and social determinants (Figure 1, Model 3). The risk of mortality thus remained significantly higher for levels A and B (HR = 1.42, 95% CI = 1.08-1.86 and HR = 3.98, 95% CI = 2.33-6.78, respectively). Differences remained statistically insignificant when comparing levels D and E with level C (HR = 1.55, 95% CI = 0.93-2.58 and HR = 1.55, 95% CI = 0.75-3.18, respectively). The effects of each group of determinants are shown in an additional file (Additional file 2: Figure S2). Model 4 of this figure corresponds to a basic model with adjustment for the confounding effect of age and gender along with objective and subjective health. After further adjustment for the effect of behavioural determinants (Additional file 4: Figure S2, Model 5), the risk of mortality remained significantly higher on levels A and B (HR = 1.35, 95% CI = 1.08-1.68, and HR = 3.56, 95% CI = 1.85-6.83, respectively). Similar results were found after additional adjustment for the effect of psychological determinants (Additional file 4: Figure S2, Model 6); the mortality risk remained significantly higher on levels A and B (HR = 1.36, 95% CI = 1.09-1.71, and HR = 3.56, 95% CI = 1.92-6.61, respectively).
After further adjustment for the effect of material determinants (Additional file 4: Figure S2, Model 7), the risk of mortality remained significantly higher on levels A and B (HR = 1.39, 95% CI = 1.10-1.75, and HR = 4.62, 95% CI = 2.29-9.36, respectively), when comparing with level C. The mortality risk did not show significant changes for levels D and E, although increased risk was noted (HR = 1.62, 95% CI = 0.91-2.87 and HR = 1.46, 95% CI = 0.56-3.78, respectively). Neither did additional adjustment for the effect of social determinants (Additional file 4: Figure S2, Model 8) affect the risk of mortality, which remained significantly higher on levels A and B, (HR = 1.42, 95% CI = 1.08-1.86, and HR = 3.98, 95% CI = 2.33-6.78, respectively).
---
Sensitivity analysis
The analysis on the full analytical sample (n = 10,106) using all available data in the different models produced results similar to those of the main analysis on complete cases (n = 8,837), (Additional file 3: Figure S3a and Additional file 4: Figure S3b in the additional files).
---
Discussion
Our study examined whether behavioural, psychological, material and social determinants could explain the association between socioeconomic status and all-cause mortality. The risk of mortality was found to vary across educational levels and to be significantly higher for respondents from the lower socioeconomic strata. Adjustment for behavioural, psychological, material and social determinants failed to eliminate the inequalities, as the risk remained significantly higher for the two groups with the lowest educational levels (A, primary education, and B, general upper secondary education) when compared with the midmost educational level (C, vocational upper secondary education). Surprisingly, no clear gradient in socioeconomic inequality as measured by educational achievement could be detected, as we found no statistically significant difference between the second-highest and the highest educational levels when compared with the midmost educational level.
---
Strengths and limitations
Our study derives part of its strength from our comprehensive approach, apparent from the inclusion of health-related behavioural, psychological and material as well as social determinants. Furthermore, the use of educational status as a proxy for socioeconomic position offered the double advantage of relative stability over a lifespan and ease of retrieval and recording. The risk of selection bias was minimized by the choice of educational status, which introduces less reverse causation than its alternatives, occupational class and income, as mobility of individuals with poor health into certain strata is less likely to be affected by differences in educational level. Selection bias is more likely to influence health differences by occupational class and income, as occupation and income tend to decrease when an individual becomes chronically ill [28]. The independent effects of occupation and income were moreover taken into account by making separate adjustments for the effect of each dimension [28]. The use of all-cause mortality as the outcome measure had several advantages, as this endpoint requires no further ascertainment than the time of death, thus preventing bias stemming from the classification of cause of death.
The criterion for selection of registers was content validity weighed against the quantity and relevance of the data. The accessibility, location and time covered by the register data were also considered [37]. Overall, the data obtained from registers was considered to be of high quality [27,29,33,34,37]. Among the limitations of the study are some unexpected irregularities in the educational data. These occurred as a consequence of several changes in the educational system over the years; hence, data from before 1974 and for immigrants with no Danish schooling are self-reported, which increases the likelihood of misclassification [29]. Furthermore, income data may be biased by the impact of undeclared work, etc. [34]. Data obtained from the regional health survey may be subject to selection bias because of the non-response rate (48.2%), as well as to inexactness of the information obtained and missing values. Analysis on the full sample size, using all available data in the different models, did however not change the study results. All-cause mortality as an outcome measure has the disadvantage of encompassing many possible causes of death, which may be distributed differently across socioeconomic strata. The measure furthermore represents a combination of the effects of disease incidence, access to treatment, and survival. Hence, the observed inequalities may at least partially be due to disparities in survival after disease incidence or in the distribution of more lethal diseases. Thus, caution should be exercised when interpreting the results, as determinants of prolonged survival might not be the same as those of disease incidence and treatment access. Data obtained on the explanatory determinants were self-reported and several variables were proxies.
Objective measures and more detailed questions might have yielded a more accurate estimation of the contribution of the various determinants, and other explanatory determinants may have been needed in order to explain the association. Furthermore, exploring the association between educational level and all-cause mortality has methodological shortcomings. This approach does not allow for a causal interpretation of the observed changes in hazard ratios and can lead to an underestimation of the effect of the determinants and an overestimation of the effect of educational status on the association [39]. Additionally, we obtained information on many possible confounders such as age, gender and co-morbidity, but despite adjustment for the most relevant ones, residual effects may be present, as the design does not eliminate unmeasured confounders that could affect the results. The response rate was less than 52%, and particularly low among young men. Low response rates in some subgroups pose problems for the representativeness of the study population relative to the background population. Previous studies have shown a tendency towards higher response rates among more highly educated subjects [40] and lower mortality rates among participants than non-participants [41,42]. Hence it is possible that the contrasts in educational level and health-related determinants observed in the study population were underestimated relative to the general population. For these reasons, caution should be taken when generalizing the results to the general population. Caution in interpretation is also warranted because of the limited number of deaths occurring in a study population and follow-up time of this magnitude.
---
Interpretation
The study gave evidence of substantial inequality in all-cause mortality among the citizens of Northern Jutland, Denmark, as significantly more subjects from the lower socioeconomic strata died in the study period. Our results are similar to those of comparable studies investigating the distribution of all-cause mortality among socioeconomic strata [1][2][3][4][5][6][7]. The risk of mortality was significantly higher on the second-lowest educational level, where the average age was lower; thus, the causes of death may have differed from those on the other educational levels [43]. Different causes, such as the use of health benefits or coping skills, may therefore have been involved. The mortality risk remained significantly higher for respondents on the lowest socioeconomic levels, which could possibly be explained by greater exposure to a wide range of risk factors for poor health over the life course. They may moreover have become more homogeneous regarding personal characteristics of significance to health, such as cognition, knowledge, material means, social support and health-related behaviours, as these disparities may over time lead to differences in risk factor profiles and vulnerability to such risk factors across socioeconomic strata [17]. A better understanding of the association between socioeconomic status and all-cause mortality is necessary to reduce the socioeconomic inequalities in mortality, which we were unable to explain by adjusting for behavioural, psychological, material and social determinants. While our results support comparable work [12,16,25], other studies have concluded that the gradient in all-cause mortality is explainable after adjustment for material determinants, either on their own or in combination with behavioural and psychosocial determinants [11,13,17].
In our study, adjustment for the effect of material determinants did increase the mortality risk on the lowest socioeconomic levels, but we found no indication of a strong effect of material determinants on all-cause mortality. However, unequal access to material resources may lead to differences in life circumstances in youth, ultimately resulting in lasting disparities in health. A life-course perspective focusing on fundamental causes, distal factors and habitus [14,44,45] therefore seems required to explain socioeconomic inequalities in all-cause mortality. Our study and previous studies are based on a causation theory, explaining inequalities in mortality by stratified differences in health determinants; an overlap of potential mechanisms should therefore be considered in explaining the socioeconomic inequalities. None of these mechanisms are mutually exclusive; the different mechanisms could thus be interrelated, challenging our ability to account for the effect on the socioeconomic gradient in all-cause mortality [11,13,17]. Such reverse causalities can be categorized as measurement errors leading to possible bias in estimates [12]. Our simplified study models cannot account for the multiple mechanisms and pathways underlying the inequalities, as only the association between educational level and all-cause mortality was assessed. To prevent a development towards stronger disparity, further exploration of these complex issues is needed. We need to understand why an underprivileged socioeconomic position places people at higher risk of death than their better-off compatriots. Further exploration of the possibility that the underprivileged groups form a homogeneous group is needed, as our data may have given an insufficient description.
A life-course perspective seems necessary for progress in explaining the persistent inequalities in all-cause mortality, as we believe this perspective is crucial to allow for the multiple mechanisms and pathways that underlie the inequality.
---
Conclusion
This study has demonstrated the existence of substantial inequality in all-cause mortality among citizens of Northern Jutland, Denmark. Despite the comprehensive approach, with incremental adjustment for the effect of a range of determinants, we were unable to account for the inequality revealed by the data. Uncovering the multiple underlying pathways may require less simplified models. We recommend that future research takes a life-course perspective that includes distal factors while simultaneously accounting for the complexity of the underlying multiple mechanisms and pathways to explain the association between socioeconomic status and all-cause mortality.
---
Additional files
Additional file 1: Table S2a. Baseline demographic characteristics, by educational level. Table S2b. Baseline behavioural characteristics, by educational level. Table S2c. Baseline psychological characteristics, by educational level. Table S2d. Baseline material characteristics, by educational level. Table S2e. Baseline social characteristics, by educational level.
Additional file 2: Figure S3b. Hazard ratios and 95% confidence intervals for educational status estimated by Cox regression models on complete cases (n = 8,837).
Additional file 3: Figure S3a. Hazard ratios and 95% confidence intervals for educational status estimated by Cox regression models on the full study sample -using all available data in the different models.
Additional file 4: Figure S3b. Hazard ratios and 95% confidence intervals for educational status estimated by Cox regression models on the full study sample -using all available data in the different models.
---
Competing interests
The authors declare that they have no competing interests.
---
Authors' contributions
CO, CTP and LRBUC conceived the concept for the study and are responsible for its design. LRBUC carried out the data management process and statistical analyses with help and advice from CTP, RNM and LEJ. LRBUC drafted the manuscript. CTP, CO, HVN, KF, HB, RNM, LEJ, SRJK and SMH contributed to interpretation of data. All authors have critically revised the text for important intellectual content, have read and approved the final manuscript, and are accountable for all aspects of the work.
In this article, I integrate symbolic threat dynamics into a theoretical discussion of religious change. Specifically, this article demonstrates how symbolic threat can lead to increases in salient collective characteristics among members of the threatened group. To make this case, I examine the religious and historical idiosyncrasies of East and West Germany. In the context of East Germany, I find a dramatic reduction in religious activity among the right-wing between 1999 and 2017, as well as a strong relationship between secularity and fear of foreign domination. Mediated by the deeply atheistic history of East Germany, secularization is here presented as a reaction of eastern identification that repeatedly emerges in the face of cultural threat. To empirically illustrate my theoretical contentions, I rely on survey data from the European Values Study (EVS) and German General Social Survey (ALLBUS).
---
Introduction
Theories of religious change have been caught between two opposing paradigms: secularization theory and religious economy theory. In its strongest form, secularization theory argues that religion erodes as modernization proceeds (Berger 1967;Wilson 1982), though the metrics of modernization vary by secularization theorist (Bruce 1999; Norris and Inglehart 2004). Religious economy theory, on the other hand, contends that religious deregulation leads to increased religious pluralism and, in turn, higher levels of religious activity (Stark and Iannaccone 1994;Stark and Finke 2000). For reasons to be discussed, however, neither of these theories can make sense of the religious change with which this work is concerned. Anomalous cases do not create cause for abandonment of the theories that fail to explain them, but rather present an opportunity for theoretical complication and improvement. With a focus on symbolic threat dynamics, I here provide a new theoretical approach to understanding religious change.
Theories of religious change have not adequately considered the impact cultural threats have on identity formation and, likewise, group threat theories have paid scant attention to processes of religious change. In this article, I contribute to both literatures by integrating the effects of cultural threat into a theoretical discussion of religious change. Central to this integration is the understanding that salient symbolic traits of a group's identity (which could fall into religious, racial, cultural, and linguistic categories) emerge in the face of a cultural threat. While the potential utility of symbolic threat has been considered in discussions of religious identification (e.g., Stark and Finke 2000;Bebbington 2012), this factor has yet to be systematically incorporated into any theory of religious change. Furthermore, speculation concerning the connection between symbolic threat and religious change has mostly revolved around situations of religious revival, whereas this article uses this dynamic to explain a case of religious decline. This work will use the contextual particularities of East and West Germany to inform the more general discussion of how threat relates to processes of identity formation. In this article, secularity will be understood as a central facet of the East German character. Throughout the reign of the German Democratic Republic (GDR), pushing a secular worldview upon East German consciousness was an important objective for the communist regime (Stolz et al. 2020). Mediated by the deeply atheistic history of East Germany, secularization is here presented as a reaction of eastern identification that repeatedly emerges among East Germans who perceive cultural threats from outsider groups. The case of East Germany demands theoretical complication of this kind, for it has demonstrated exceptional patterns of religious decline.
The secularization of East Germany shortly after German reunification presents an empirical "puzzle" to sociologists of religion, for neither secularization theory nor religious economy theory can explain this period of religious change. With the fall of the Soviet Union and consequent deregulation of religion, a religious revival was observed among the vast majority of Eastern Bloc nations in the 1990s (see Table 1). These spikes in religiosity that came along with religious deregulation clearly align with the claims of religious economy theory. Anomalous to this pattern, however, East Germany became increasingly secular in the years following reunification (Pollack 2002; see also Table 1), and has since become the least religious society in the world (Froese and Pfaff 2005;Smith 2012). 1 This curious religious decline is also no triumph for proponents of secularization theory, for the steep drops in religiosity within the mere six years presented in Table 1 certainly do not correspond to the extent of modernization that took place within this timeframe. 2 As others have noted (Froese and Pfaff 2005), the claims of secularization theory are also incongruous with the religious revival of other Eastern Bloc nations. Furthermore, secularization theory falls short of an explanation for the combination of differences between East and West Germany, namely that the latter is more religious (Smith 2012;Müller et al. 2016) and modernized (Uhlig 2008) than the former. In this article, I find the connection between symbolic threat and identity formation to be a useful mechanism to make sense of religious change. Central facets of the GDR identity, such as socialist voting (Grix 2000), eastern consumerism (Blum 2000), and secularity (Froese and Pfaff 2001), emerged at a time when the eastern identity was perceived to be threatened by the nature of Germany's reunification process. 
I connect this mechanism of eastern identification in the 1990s to a more recent symbolic threat, namely the growing fear of Germany's migrant population. Although secularity is commonly associated with left-wing ideology, this article finds a dramatic reduction in religious activity among the East German right-wing, which in recent years has become virtually identical to the left-wing by all tested metrics of religiosity. This article explores the role that fear of foreign domination may play in this reactive formation of the secular identity. Previous research has not connected this process of reactive identification to the recent secularization of the political right. To my knowledge, this is the first article to find a period of right-wing secularization in any context. 3 To make sense of these changes, I first describe the mechanisms of group threat theory upon which my core argument relies. Next, I discuss the sharp religious decline during the reign of the GDR in light of the various forms of religious persecution that took place throughout the regime's existence. From here, I document religious changes in formerly Soviet-occupied countries after the collapse of the Soviet Union, and present East Germany's exceptionalism (i.e., its continued secularization) as a reaction to the cultural threat of western-led reunification. In the following section, I connect this same mechanism of reactive secular identification to the more recent threat of migrant arrivals. To empirically illustrate these dynamics, I test the ways in which political ideology and xenophobia relate to religiosity in East and West Germany. I find recent spikes in secular identification among the right-wing, as well as a strong relationship between secularity and fear of foreign domination. Finally, I discuss the potential extension of this social logic to other contexts, and emphasize the ways in which cultural vulnerability and, in turn, reactivity could pervade country and subject.
---
Reactive Identification
The connection between symbolic threat dynamics and religious change relies on the contention that symbolic threat can lead to increases in salient collective characteristics among culturally threatened groups. The logic of this theoretical claim is similar to that of competitive threat theory, which contends that threat contributes to the development of prejudice and in-group identification (Blumer 1958;Tajfel 1982). Competitive threat theories are primarily concerned with conflict over scarce resources between competing groups, as well as the relative size of the threatening "outgroup" (Blalock 1967;Quillan 1996). In addition to materialist matters (e.g., unemployment, economic competition, etc.), a great deal of research has also been sensitive to the ways in which boundary making and collective identification connect to ideational dimensions, such as race, religion, nationhood, language, and culture (Wimmer 2008;Sarjoon et al. 2016;Lubbers and Coenders 2017;Gorman and Seguin 2018;Lam et al. 2023).
The concept of symbolic threat by itself is thus hardly a novel one. It is the connection between threat dynamics and theories of religious change that has been largely overlooked, however. Past research has focused on the relationship between symbolic threat and identity formation (e.g., Glaeser 2000;Gorman and Seguin 2018), though this article analyzes these dynamics over time and integrates them into a discussion of religious change. Central to the linking of these seemingly disparate theoretical domains is the impact that the "outsider" has on the formation of identity among members of the threatened group.
When the dominant group senses that the subordinate group has improved its economic or symbolic position (and thus, challenges the position of the dominant group), they react defensively and develop feelings of prejudice (Blumer 1958;Quillan 1995; see also Stipisic 2022). When this perception is formed, they view members of the threatening group not on the basis of their respective individualities, but rather on the abstract image of their group membership. In turn, members of the dominant group develop a collective perception of themselves in relation to the subordinate group, for it is the relative position of the latter that defines the boundaries of the former. Threat thus plays a role in the process of identity formation, for such formation is often done in light of the encroachment of the other. Glaeser (2000, p. 399), for example, likens the construction of identity to "a ping-pong of identifications between self and other." This reactive process of identity formation among members of the threatened group necessarily depends upon their relationship with the otherized group. Reactions to threat vary with context, for such reactivity depends upon the overarching historical and cultural particularities through which it is mediated.
In this article, secularity will be understood as an East German in-group attribute that emerges in reaction to perceived ideational threat. I propose that a focus on symbolic threat dynamics can help make sense of East Germany's exceptional patterns of religious decline. By analyzing this pattern over time, I here demonstrate how secularization repeatedly surfaces among East Germans, who feel a sense of cultural "infiltration" from outsider groups. To understand the nature of this reactive identification, it is crucial to be cognizant of the historical and cultural idiosyncrasies at hand.
---
Religion, Materialism, and the GDR
From its founding until its collapse, the GDR was committed to promoting scientific materialism through atheist proselytizing. When the GDR was founded in 1949, over 90 percent of the population belonged to the church. By the time of its dissolution in 1990, this figure had dropped to just under 30 percent (Institut für Demoskopie 1990; Pollack 2002). In an effort to popularize the socialist personality, the GDR attempted to expedite the "withering away" of religion through several means (e.g., citizens with open religious convictions faced occupational and educational discrimination, pastors and congregations were often harassed by state officials, historic churches were demolished, religious organizations and charities were eradicated, religious instruction was removed from classrooms, etc.). Under the GDR, such persecution was justified by the belief that religion was inherently inimical to the advancement of socialist objectives and wholly incompatible with the Wissenschaftliche Weltanschauung (scientific worldview), a term which was printed on nearly every GDR document.
The GDR's framing of an ideological juxtaposition between science and religion is, perhaps, best exemplified by the Jugendweihe (ceremony of youth). The Jugendweihe is a ceremony that originated in the mid-nineteenth century, was abolished in 1950, and then reintroduced by the GDR in 1954 (Besier 1999). Although it has taken on several meanings since its inception, the Jugendweihe was an atheistic and socialist ceremony which 14-year-olds participated in throughout the reign of the GDR. Participants swore an oath of loyalty to the socialist State and science as opposed to regressive, irrational modes of thought. Preparatory classes for the Jugendweihe involved readings that emphasized pride in GDR culture, and presented scientific explanations as superior to religious teaching. Participation in the ceremony was treated as a prerequisite for educational opportunities and coveted employment.
It is thus no surprise that Jugendweihe participation skyrocketed shortly after its reintroduction (see Figure 1). In 1955, only 17.7 percent of adolescents participated, though just five years later, this figure shot up to 87.8 percent. By 1980, 97.5 percent of GDR youth took part in the ceremony (Droit 2014). The Jugendweihe was meticulously organized to be in competition with religious ritual, as it took place on Sunday mornings in spring, and its structure mirrored that of religious confirmation (e.g., the ritual included music, lectures of morality, etc.). It appears that the GDR was successful in their efforts of conversion, as the sharp rise in Jugendweihe participation was accompanied by an equally staggering decline in religious confirmations. As can be seen in Figure 1, participation in religious confirmation decreased by 45.7 percent between 1955 and 1960.
The Church, of course, did not take kindly to the reintroduction of the Jugendweihe nor any of the State's attempts to eliminate the religious imagination. Consequently, Church-State relations became particularly fractious in the 1950s and 60s. The State harassed pastors and congregations, published defamatory articles on religious leaders, supervised the Church's regional papers (which required State approval prior to publication), demolished historic churches, and significantly reduced church subsidies (Ramet 1992). This is not to suggest that the extremity of Church-State tension did not vary throughout the reign of the GDR. With the 1978 Church in Socialism agreement, for example, the Union of Protestant Churches committed to political indifference in exchange for greater Church autonomy. This degree of independence granted dissidents a space for non-compliance, evidenced by the formation of about 100 independent peace groups (nearly all of which were sheltered by a church) (Pfaff 2001).
Open discussion and dissident organization, however, were nothing but vexations for the GDR, which banned church periodicals and physically interfered with protests. This dissidence was, perhaps, most pronounced in the Saxon city of Leipzig. The Monday demonstrations in Leipzig were arguably the most influential (certainly the most well-known) of the protests that occurred during the Peaceful Revolution of 1989. The Nikolaikirche, a twelfth-century church in Leipzig, provided a space for dissidents to escape the reluctant conformity demanded of them in non-autonomous spaces. It is also worth noting that the Monday demonstrations, at which hundreds of thousands of protestors famously chanted Wir sind das Volk (we are the people), took place right after evening peace prayers.
[Figure 1. Participation (%) in the Jugendweihe and in religious confirmation.]

As is well-known, the fall of the GDR shortly followed the Peaceful Revolution of 1989. The new elections in March of 1990 resulted in a coalition government led by Lothar de Maiziere and, in October of the same year, the country was reunified. Fourteen pastors served as members of the transitional parliament, and four as members of de Maiziere's cabinet. Several Holy Days became recognized as State holidays, and nearly all State pressures against the Church were removed (Ramet 1992). With democratization and reunification came a host of societal and political changes, including the deregulation of religion.
---
Perceived Threat of West German Domination
Proponents of religious economy theory (Stark and Iannaccone 1994;Stark and Finke 2000) may expect this distancing from GDR culture to be reflected in the change of religious demographics. However, the continued secularization of East Germany in the years following religious deregulation does not comport with this postulate. Proponents of secularization theory (Bruce 1999;Norris and Inglehart 2004) can explain East Germany no better, for the dramatic religious decline between 1990 and 1996 does not correspond to the extent to which East Germany modernized within this short six-year timeframe. Furthermore, the contentions of secularization theory are incongruous with the religious revival of the other post-communist countries during this time. As others have observed (Froese and Pfaff 2005), East Germany is clearly an anomalous case, for unlike other Eastern Bloc nations, it cannot be explained by the prevailing theoretical paradigms of religious change. To better understand this anomaly and, in turn, improve upon these theories, focus must be directed toward the peculiarities of East German conditions.
While citizens of the other Soviet-occupied countries retained their national identities after the collapse of the Soviet Union, the GDR was subsumed under its western counterpart, the Federal Republic of Germany (FRG). East Germans were no longer citizens of their former GDR, though a sense of two national identities would persist. Although over 90 percent of East Germans were in favor of reunification (Howard 1995), the unification process was abrupt, western-led, and perceived as a "colonization" of East German society. Unlike West Germans, East Germans were hurled into a new situation in which they had to learn and adapt to their new society and national identity. A division within technical unity was palpable throughout all of Germany, particularly in the eastern regions. In a 1993 poll, for example, only 6 percent of East Germans and 14 percent of West Germans viewed East-West relations in a positive light, with 68 percent of East Germans blaming the West for said polarization. By 1997, 67 percent of East Germans claimed that they feel more East German than German (Hogwood 2000).
East Germans were symbolically ostracized, as they were (and, to an extent, still are) often the recipients of cultural teasing (e.g., attacks made against their accents, level of intelligence, etc.). In his ethnographic work on eastern and western Berlin police officers, Glaeser (2000) observes how West Germans would rarely view East Germans as equals, often characterizing eastern society as an inferior state from which the West had nothing to learn. From the West German perspective, East-West relations were "something like that between adults and children, where adults do not consult with their children on serious matters until they have grown up to become adults themselves" (Glaeser 2000, p. 329).
As for materialist concerns, it became more difficult for East Germans to find employment, as many of their formerly legitimate qualifications were deemed obsolete with reunification. This was a particularly troubling development for East Germany, which had an unemployment rate nearly double that of West Germany (Grix 2000). In 1990, over 70 percent of East Germans had a positive opinion about the economy, though this percentage shrank to slightly below 20 percent by 1996 (Grix 2000). In 1995, nearly three-quarters of East Germans agreed that former GDR citizens are treated like second-class citizens in unified Germany (Howard 1995). This dissatisfaction became so prevalent that nearly half of East Germans in 1996 even saw GDR times as "good times" where "everyone was equal and we were all in work" (Hogwood 2000, p. 59).
In reaction to the sudden, western-led changes of the 1990s, the phenomena known as Ostalgie (the term's compounds being Ost (east) and Nostalgie (nostalgia)) and Trotzidentität (identity of contrariness) surfaced in political and cultural ways. Between 1990 and 1998, for example, sympathy toward socialist views more than tripled among East Germans (Froese and Pfaff 2005). The Party of Democratic Socialism (PDS), a descendant party of the GDR, benefited from this socialist sympathy, as the vast majority of their support came from East German states (Statistisches Bundesamt 2017a). Eastern forms of defensiveness were not limited to economic assessments and protest voting, but also operated in symbolic domains. For example, research on consumer trends in the former GDR shows that old products of eastern origin became increasingly popular in the mid-to-late 1990s (Blum 2000). This was not a mere fad, as 45 percent of East Germans claimed to deliberately purchase eastern products as often as possible (Howard 1995). A similar form of this eastern contrariness can be observed in the post-reunification revival of the Jugendweihe. With reunification, the ceremony was no longer required for certain employment and educational opportunities. Jugendweihe participation thus dipped immediately after the fall of the GDR, though it surprisingly rose after a few years of experience in unified Germany, with over 60 percent of East Germans voluntarily participating in the ceremony by 1999 (Saunders 2002).
With these indications in mind, it appears as if it was the collapse of the GDR that led to its symbolic return. Politically and culturally, a reactive connection with the GDR identity materialized throughout East Germany shortly after reunification. Identity formation depends upon the nature of the threat imposed by the otherized group (Blumer 1958;Glaeser 2000;Wimmer 2008). In reaction to the threat of western domination of the East, central facets of the eastern identity, such as socialist sympathy, eastern consumerism, and Jugendweihe participation were expressed in a voluntary manner. Among such reactions of eastern contrariness was the secular identity, as indicated by the sharp decreases in church attendance, belief in God, and self-assessment of religiosity in the years following reunification (recall Table 1). Relatedly, Pollack (2002) finds that East Germans' trust in the Church declined rapidly in the 1990s, and suggests that this decline could be attributed to the common perception of the Church as either being or becoming a western institution. Literature in social psychology indicates that, in reaction to threat, groups seek a sense of social order (Kay et al. 2009); a process which can involve identifying with salient collective traits. As demonstrated by the examples here provided, this reach for restoring normalcy and tradition could be observed culturally, politically, and religiously in the years following the collapse of the GDR. It is in this light of Ostalgie and Trotzidentität that the secularization of the 1990s will be understood. 4
---
Perceived Threat of Migrant Arrivals
More recently, a new cultural interaction has led to the materialization of a similar, though distinct form of eastern reactivity. Sharp increases in Germany's migrant population have brought immigration and asylum seeking to the center of political discussion in Germany. Between Chancellor Angela Merkel's 2005 victory and the 2017 federal election, 14,534,644 immigrants and asylum seekers arrived in Germany, resulting in a net migration of 3,887,599. In 2015 alone, 2,136,954 immigrants and refugees (net migration of 1,139,402) arrived in the country (Statistisches Bundesamt 2019a). Out of all first-time asylum applicants in European Union (EU) Member States, 61 percent registered in Germany in the first quarter of 2016 (Juran and Broer 2017). Over 20 million residents, approximately one-fourth of the German population, reported having a migrant background in 2018 (Statistisches Bundesamt 2019b).
Similar to the speed and extent of the country's demographic changes, citizen opinion on (particularly Muslim) immigration has dramatically changed. In 2016, 41.4 percent of German respondents rather or entirely agreed that entry of Muslims into Germany should be prohibited; a percentage which has nearly doubled since the 2011 edition of the survey was conducted. Within the same timeframe, 50 percent of Germans reported that they feel like a foreigner in their own country; a figure that has increased by 19.8 percent in just 5 years (Decker et al. 2016). Out of all religious groups, Germans view Muslims the most negatively, with anti-Muslim sentiment most prevalent in the eastern regions of the country (Pickel and Yendell 2014;Pickel 2018). The majority of East Germans perceive Islam as a threat to Germany, and just over 10 percent agree that Islam is compatible with German society (Pickel 2018). According to all measurements, anti-Islamic views are on the rise in Germany, and are particularly strong in the East.
Perhaps the most notable manifestation of this reaction to migrant arrivals is the emergence of the Alternative für Deutschland (AfD) or "Alternative for Germany", a far-right populist party founded in 2013. Some of the section headings in the party's manifesto include "German as Predominant Culture Instead of Multiculturalism", "Islam and Its Tense Relationship with Our Value System", "Islam Does Not Belong to Germany", "Tolerate Criticism of Islam", "No Public Body Status for Islamic Organizations", and "No Full-Body Veiling in Public Spaces" (AfD 2017). Consistent with the manifesto's content, past research has found limiting the arrival of Muslim immigrants to be the chief concern of AfD supporters (Stier et al. 2017;Pickel 2018). Ideological prioritizing of this kind is also characteristic of the Patriotic Europeans Against the Islamization of the Occident (PEGIDA), a movement which, like the AfD, advocates for the limiting of Muslim influence in Germany. Supporters of PEGIDA reference immigration control, nationalism, and Islam among their top reasons for joining the movement, with over 80 percent of them fearing the loss of their national identity (Daphi et al. 2015).
This nativism has been no more palpable than in the former GDR states, particularly in the state of Saxony. Saxony has long been a symbol of pride in German heritage and, as has been touched upon, was home to some of the most notable demonstrations of the Peaceful Revolution of 1989. In recent years, however, Wir sind das Volk has taken on a jingoist meaning, as it is now often found on the signs held at anti-immigrant demonstrations. Within just six months after their founding, PEGIDA held 252 rallies, with protestors totaling approximately 240,000. Of these, 80 percent protested in Saxony (Virchow 2016). Like PEGIDA support, AfD support is primarily concentrated in the eastern regions of the country (Statistisches Bundesamt 2017b). Between the 2013 and 2017 federal elections, all of the highest upsurges in electoral support for the AfD occurred in East German states: Saxony (20.2% increase), Thuringia (16.5% increase), Saxony-Anhalt (15.4% increase), Brandenburg (14.2% increase), and Mecklenburg-Western Pomerania (13.0% increase) (Lees 2018). While the AfD/PEGIDA message has not gained much traction in the western regions of Germany, it has considerably resonated with East German sensibilities. Notably, church commitment is negatively correlated with AfD support (Huber and Yendell 2019) and non-religious voters are substantially more likely to vote for the AfD than mainstream political parties, such as the Christian Democratic Union and the Social Democratic Party (Arzheimer and Berning 2019). 5
---
Secularization: A Recurring Reaction to Symbolic Threat

Among those concerned with the preservation of such symbolic dimensions of their identity, we may again be witnessing a reaction of eastern identification in the face of cultural threat. Not unlike the eastern secularity that surfaced shortly after the western-led process of reunification, a considerable number of East Germans may now be responding to what they perceive as the newest form of cultural "infiltration", namely the recent spike in migrant arrivals. This more recent cultural contrariness, I posit, may be associated with the intensification of the eastern personality. When a group perceives a threat, said group will pursue order and normalcy through a process of reactive identification. Many among the right-wing perceive the arrival of immigrants as a threat to their identity (Stier et al. 2017;Yendell and Pickel 2019) and thus, it may be the case that yet another culturally defensive reaction (mediated by the overarching secular history of East Germany) of secularity is emerging. Although other researchers have noted the religious decline of the 1990s (Froese and Pfaff 2005), the literature has yet to explore the more recent secularization of the political right. Both reactive secularities, I contend, revolve around the mechanism of defending the eastern identity in the face of cultural threat.
Related research has surfaced in recent years, though it would be an overstatement to claim that the literature has achieved scholarly consensus. Inglehart and Norris (2016) find a positive association between religiosity and right-wing populist support; others have found religiosity to have little effect on radical right voting (Arzheimer and Carter 2009;Montgomery and Winter 2015); and still other research claims that the relationship is further complicated when religious orthodoxy is brought into the scope of analysis (Immerzeel et al. 2013). Although the relationship between secularity and right-wing ideology has received some attention, the change in this relationship (i.e., secularization) over time has been repeatedly overlooked. Thus, the current theoretical accounts are severely limited, leaving changes in the given political landscape(s) unconsidered. To advance toward more lucid determinations, future inquiry should be cognizant of the potentially catalytic contextual factors pertinent to their research, how these factors have changed over time, and in turn, how they have affected the given relationship(s) under examination.
Right-wing populist support is a telling phenomenon which provides indications of social logics relevant to the present discussion, though it does not entirely encompass the symbolic variant of group threat with which this work is concerned. The cultural defensiveness that materializes with the detection of a cultural threat is not only a political, but also an emotional reaction which may operate within, but also outside of, AfD voting and right-wing ideology in general. This reactive emotionality and identification have more to do with pure antipathy toward and fear of the otherized group than with party support and/or policy position. The conceptual demarcation here is in accordance with a characterization from Quillan (1995, p. 587), who notes that "prejudice is characterized by irrationality (a faulty generalization) and emotional evaluation (antipathy)." Thus, in addition to considering religious change among the right-wing, this work taps into this emotionality by examining fear of foreign domination as it relates to the secular identity.
In the years following increases in migrant arrivals, xenophobic attitudes are on the rise in Germany (Decker et al. 2016), and are particularly prevalent among East Germans (Yendell and Pickel 2019), as well as right-wing groups (Daphi et al. 2015;Stier et al. 2017). Similar to the Ostalgie of the 1990s, it is possible that the more recent perceived threat of migrant arrivals has led to a defensive reaction of secular identification among the right-wing and culturally vulnerable in the East. Assuming this is indeed the case, I expect to find a significant religious decline among the East German right-wing between 1999 and 2017. If secularity is a part of the eastern reactivity I have here described, then a positive relationship between fear of foreign domination and the secular identity will be found in the context of East Germany.
---
Methods
To test these expectations, I rely on data from EVS and ALLBUS. Notably, I am unable to fully establish construct validity with the available data, though this analysis still provides empirical indications that align with my theoretical contentions. I use the 1999 and 2017 datasets from EVS to examine religious change by political ideology. 6 The association between xenophobia and religiosity is measured with the ALLBUS 2018 dataset, as it contains a metric that most precisely captures the fear of foreign domination with which this work is concerned.
---
Dependent Variables
Religiosity is the dependent variable in the current study. I measure religiosity with binary indicators of church attendance, belief in God, and self-assessment of religiosity. Each dimension of religiosity is analyzed separately to capture their respective nuances (e.g., religious behavior, belief system, self-identification, etc.). When using the EVS datasets, respondents are coded as attending church if they report attending "once a month" or more frequently. When asked if they believe in God, respondents are given "yes" or "no" as response options. For self-assessed religiosity, respondents are coded as religious if they consider themselves to be "a religious person", as opposed to responses that earn an irreligious categorization: "not a religious person" and "a convinced atheist." When using the ALLBUS dataset, respondents are considered to attend church if they report attending "1 to 3 times a month" or more. The following responses are categorized as believing in God: "I believe in God now and I always have" or "I believe in God now, but I didn't used to." For self-assessed religiosity, respondents are considered religious if they describe themselves as "extremely religious", "very religious", or "somewhat religious". 7
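As an illustration, the binary coding just described for the EVS items can be sketched as follows. The function names and response strings are readable stand-ins for the EVS codebook labels, not the datasets' actual variable names.

```python
# Illustrative recoding of EVS-style survey responses into the three
# binary religiosity indicators described in the text. Response strings
# are stand-ins for the actual EVS codebook labels.

# "Once a month" or more frequent counts as attending church.
ATTEND_YES = {"more than once a week", "once a week", "once a month"}

def attends_church(response: str) -> int:
    """1 if the respondent attends 'once a month' or more frequently."""
    return int(response in ATTEND_YES)

def believes_in_god(response: str) -> int:
    """The EVS item offers a simple yes/no response."""
    return int(response == "yes")

def self_assessed_religious(response: str) -> int:
    """'A religious person' counts as religious; 'not a religious person'
    and 'a convinced atheist' earn an irreligious categorization."""
    return int(response == "a religious person")
```

Analyzing each indicator separately, as the text describes, then amounts to fitting one model per binary outcome rather than collapsing them into a single religiosity index.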
---
Independent Variables
Political ideology and fear of foreign domination are the focal independent variables. "Left-wing", "moderate", and "right-wing" are the three categories representing political ideology. In each dataset, this categorization is determined by self-identification on a scale ranging from 1 to 10, with 1 representing the farthest left; and 10 the farthest right. Respondents who place themselves between 1 and 4 are coded as politically left-wing; 5 as political centrists, and between 6 and 10 as politically right-wing. 8 Respondents are considered to exhibit fear of foreign domination if they "tend to" or "completely" agree that "because of its many resident foreigners, Germany is dominated by foreign influences to a dangerous degree". 9 Fear of foreign domination is coded as a binary variable.
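The categorization of the focal independent variables can be sketched in the same spirit; again, the response labels below are illustrative stand-ins rather than the surveys' exact wording.

```python
def ideology_category(self_placement: int) -> str:
    """Collapse the 1-10 left-right self-placement scale into three
    categories, following the cut-offs described in the text."""
    if 1 <= self_placement <= 4:
        return "left-wing"
    if self_placement == 5:
        return "moderate"
    if 6 <= self_placement <= 10:
        return "right-wing"
    raise ValueError("self-placement must be between 1 and 10")

def fears_foreign_domination(response: str) -> int:
    """Binary indicator: agreement ('tend to' or 'completely' agree) with
    the ALLBUS foreign-domination item. Labels are illustrative."""
    return int(response in {"tend to agree", "completely agree"})
```

Note that this scheme treats the single midpoint (5) as the moderate category, so the right-wing band (6-10) is one point wider than the left-wing band (1-4), mirroring the cut-offs stated above.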
---
Control Variables
I control for age (measured continuously), sex (1 = male), level of education (using each dataset's categorizations of "lower", "medium", and "higher", which correspond to lower than upper secondary education, upper secondary education, and higher education, respectively), and monthly income. For monthly income, the EVS 1999 categories of "lower", "medium", and "higher" correspond to "under 3000 Marks", "3000-4999 Marks", and "over 4999 Marks", respectively; in EVS 2017, the brackets are 2200 Euro or less, 2201-4250 Euro, and over 4250 Euro. ALLBUS 2018 uses slightly different brackets; there, I divide income level into under 2250 Euro a month, 2250-3999 Euro a month, and over 3999 Euro a month.
---
Analytic Strategy
In the following analyses, East and West Germany are analyzed separately. 10 This comparison assists in conjecturing as to whether the expected relationships are connected to the overarching secular history of the GDR, or alternatively, if they simply exist throughout all of Germany. Furthermore, both secularity and anti-immigrant attitudes are more prevalent in East Germany than they are in West Germany. Thus, the two regions must be divided in this analysis to address any potential issues pertaining to a regional confounder. In West Germany, I do not expect to find a period of right-wing secularization, nor a strong relationship between secularity and fear of foreign domination, for western forms of cultural reactivity are not mediated by an atheistic history. I thus suspect these findings to be particular to the eastern regions of the country.
Using the EVS datasets, I report the percent change in religiosity among each political ideology category between 1999 and 2017. To test whether the association between religiosity and political ideology changes over time, I use multivariate models that account for controls, append the EVS 1999 and 2017 datasets, and interact time with political ideology. Specifically, I estimate logistic regression models to predict religiosity and interact a binary indicator of survey year with the categorical measure of political ideology. Following the guidance of Ai and Norton (2003), I do not interpret the coefficient of the interaction term, but instead assess the interaction using predicted probabilities and marginal effects. The relevant marginal effects, calculated as the difference in the predicted probability of religiosity between the left- and right-wing within each year (and differences across years), are presented.
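The logic of assessing an interaction through predicted probabilities rather than its raw coefficient (per Ai and Norton 2003) can be illustrated with a toy logistic model. The coefficients below are invented purely for demonstration; they are not estimates from the EVS data.

```python
from math import exp

def predict_p(right_wing: int, year2017: int) -> float:
    """Predicted probability of religiosity from a toy logistic model with
    a year x ideology interaction. Coefficients are invented for
    illustration only, not estimated from the EVS data."""
    b0, b_right, b_year, b_inter = -1.0, 0.8, -0.1, -0.9
    xb = (b0 + b_right * right_wing + b_year * year2017
          + b_inter * right_wing * year2017)
    return 1.0 / (1.0 + exp(-xb))

# Marginal effect of right-wing (vs. left-wing) ideology within each year:
gap_1999 = predict_p(1, 0) - predict_p(0, 0)  # positive: right more religious
gap_2017 = predict_p(1, 1) - predict_p(0, 1)  # near zero: the gap has closed
# Difference across years: the quantity of substantive interest.
did = gap_2017 - gap_1999
```

In this sketch, the negative `did` value is what would register right-wing secularization on the probability scale: the left-right religiosity gap shrinks between survey waves, a change that the interaction coefficient alone does not convey in probability terms.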
Negative views of immigrants are, of course, not entirely subsumed under the political right. To provide further indicative support of my theoretical claims, it is thus important to examine not only right-wing ideology, but more particularly, fear of foreign domination in order to understand the cultural reactivity in the East. Using the ALLBUS 2018 dataset, I estimate additional multivariate logistic regression models (in both the East and West) to predict religiosity, with fear of foreign domination as the focal independent variable. The estimates are presented as odds ratios. I then test whether the association between fear of foreign domination and religiosity is significantly different by region. To do so, I interact a binary indicator of region (i.e., East or West Germany) with fear of foreign domination and assess the interaction using predicted probabilities and marginal effects (see Table A1 in Appendix A).
---
Results
Table 2 shows that the right-wing in East Germany have demonstrated sharp drops in religiosity by all of the tested metrics. The same degree of religious change cannot be observed among the politically moderate and left-wing, which demonstrate slight decreases and increases in religiosity, respectively. With this period of religious change, the right-wing has become as secular as moderate and left-wing respondents. In fact, the right-wing reports the lowest rates of church attendance among the three ideological categories. This pattern appears to only pertain to East Germany, for secularization is not particular to the right-wing in West Germany. To be clear, it is not the case that East Germany, including respondents of all ideologies, has dramatically secularized between 1999 and 2017, as religiosity rates have remained relatively static within this timeframe (data available upon request). Rather, the period of secularization shown is particular to the East German right-wing, who are more likely to exhibit fear of foreign domination than their left-wing and moderate counterparts (Yendell and Pickel 2019). In Figure 2, a significant reduction in religious activity among right-wing respondents can be observed between 1999 and 2017 in East Germany. In 1999, right-wing respondents in East Germany are, by all metrics, more likely to be religious than are left-wing respondents (p < 0.001). These relationships are stronger and more significant in the East than they are in the West (see Figures 2 and 3). Given the atheist proselytization that took place during the reign of the GDR, a perceived incompatibility between religious thought and left-wing ideology was likely not uncommon among East Germans in the years following reunification. This perception of mutual-exclusivity was, perhaps, not as pronounced in the historically less regulated West.
In East Germany in 2017, however, the political right has become virtually identical to the political left by every measure of religiosity (see Figure 2). Indeed, political ideology is no longer a predictor of religiosity in the way that it was in 1999. The results indicate that this phenomenon is particular to the East, as the religious differences between left- and right-wing respondents remain significant in West Germany (p < 0.001) (see Figure 3 and Table 3). In East Germany, I find a significant reduction in the religious disparity between the left- and right-wing in terms of church attendance (p < 0.001), belief in God (p < 0.05), and self-assessment of religiosity (p < 0.01). By no metric of religiosity are these changes significant in West Germany. In the years following increases in migrant arrivals, the secular identity has become more common among the political right in East Germany. This phenomenon is particular to the right-wing, as respondents of the other political orientations do not come close to mirroring this degree of secularization.
In accordance with EVS data, data from ALLBUS show that political ideology does not predict religiosity in East Germany. Unlike data from EVS, however, Table 4 shows that the relationship between political ideology and religiosity is insignificant in West Germany. While there are no longer significant religious differences across the political aisle, fear of foreign domination holds a strong inverse relationship with church attendance (p < 0.001), belief in God (p < 0.01), and self-assessment of religiosity (p < 0.01) in East Germany (see Table 4). It is clear that this relationship is more palpable in the East, as the associations do not achieve statistical significance and vary in direction in the West. These results demonstrate that it is not a confounding right-wing ideology in general, but more specifically a fear of foreign domination that predicts the secular identity in East Germany.
While it is a cultural rather than socioeconomic threat with which this work is concerned, it is important not to overlook the role of variables related to deprivation. East Germany is not as economically stable as West Germany and thus, one could reasonably suspect forms of cultural identification in the East to be linked to economic intimidation.
As for the reactive secularity under examination, however, the data do not reflect this anticipation. In relation to the tested metrics of religiosity, level of education and income are inconsistent in direction and do not once achieve statistical significance in East Germany. There is no evidence to suggest that socio-economic variables link to the reactive process of secular identification in this analysis. This finding may suggest that this phenomenon is more likely a matter of cultural competition, rather than of economic worry.
---
Discussion
Although the prevailing theories of religious change cannot explain the case of East Germany, I have here suggested that a focus on symbolic group threat may help make sense of this empirical puzzle. The impact cultural threat can have on identity formation has been largely ignored in discussions of religious change. While symbolic threat theorists have been cognizant of the relationship between cultural threat and identity formation, my article examines these dynamics over time and integrates them into a discussion of religious change. When a group perceives a cultural threat, increases in central characteristics of the identity of said group (which may be relevant to a wide array of sociological subfields) can be observed.
This article has outlined how, against the prevailing theoretical expectations, East Germany became increasingly secular after the process of reunification and consequent deregulation of religion. To understand this exceptionalism, I theorize that this period of secularization is connected to the Ostalgie/Trotzidentität phenomena which surfaced in reaction to the threat of West German domination. I then contend that a similar mechanism of reactive identification has recently surfaced on the right in the face of a new cultural threat, namely the fear of Germany's migrant population. Between 1999 and 2017, the data show that sharp drops in religiosity have occurred among the political right in East Germany. This is the first article to produce a finding on right-wing secularization in any context.
These decreases are particular to the East German right-wing, as they cannot be observed among left-wing and moderate respondents. Nor is this pattern found in the context of West Germany. Differences between East and West Germany, I argue, could be understood in light of their respective overarching religious histories. Secularity, for example, was inextricable from the character of the GDR in a way that it was not from that of the Bonn Republic (i.e., the former West Germany). This article also demonstrates that secularity and fear of foreign domination are closely related in the East, whereas the associations are not at all noteworthy in the West. It appears that the eastern trait of secularity repeatedly emerges among those who perceive cultural threats, such as reunification in the 1990s and liberal immigration law in recent years.
Given the cultural and economic differences between East and West Germany, both materialist and symbolic threats are worthy of consideration. Within the process of secular contrariness, however, the results provide little reason to suspect that this reactivity is in connection with materialist concern. Neither education nor income play a role in the reactive secularity under examination. This finding may indicate that it is not economic vulnerability, but rather fear of cultural denigration that plays a role in the materialization of the reactive identity. Further study on this matter is warranted, however, as previous findings concerning the associations between conditional variables and forms of cultural reactivity (e.g., anti-immigrant behavior, far-right violence, right-wing populist voting, etc.) have been mixed (Koopmans and Olzak 2004; Lengfeld 2017; Patana 2020).
Although it is the intricacies of only one country that I have here examined, I encourage other researchers not to overlook the importance of thorough historical, political, and cultural investigation of a specific context. The nature of the reactive identity will depend upon the contextual factors (e.g., salient collective characteristics, religious history, etc.) in which it is engulfed. In-depth analyses of certain conditions, particularly when they deviate from prevailing theory and expectation, can inform our theoretical development by identifying catalytic factors, which would go undetected with broader (though, more cursory) analyses. It is important for detailed examination of this kind to be in conversation with broader comparative approaches, for such collaboration can help inform future criteria used in the process of selecting which variables to examine. As Voas and Chaves (2016: 1549) note, religious differences between countries "are a matter of history and culture, and explaining them always requires a combination of the general and the particular".
Territorial shifts, immigration policy, and religiosity are certainly not the only factors that could be tested within the theoretical framework I provide. When a certain population perceives a threat to their identity (and that threat could, though need not be foreign influence), a reaction of collective identification (the form of which is mediated by the cultural characteristics of the given national history) may emerge. Religious change (and likely other changes of sociological interest) among the respective population could be understood in light of their relation to, and the unraveling of, the cultural and historical idiosyncrasies of the given context. Future research could identify one of the many contexts in which a nativist or culturally defensive population exists, select an aspect that is central to this population's national character, and observe how this trait changes before and after exposure to the perceived threat. The variables selected for analysis should vary by context, for it is a mechanistic focus on cultural vulnerability and, in turn, reactivity, which could provide potential avenues for replication.
The emphasis on contextual particularity is especially crucial when attempting to explain anomalous cases. In the case of East Germany, the prevailing theoretical paradigms of religious change fall short of an explanation and thus, an opportunity for theoretical improvement presents itself. Drawing on the peculiarities of East Germany, I have found a focus on symbolic threat and subsequent identity formation to be a useful point of departure. Little work has been done to understand the historical and cultural dynamics of secularization (for more discussion on this limitation, see Gorski 2000). To adequately elucidate the dynamics of cultural identification, researchers must be sensitive to the complexities of the historical and political conditions relevant to their analyses. Historicization of this kind can assist in analyzing important changes in light of culturally vulnerable reactions which are, by necessity, filtered by the overarching national character of the given context.

1

As can be seen in Table 1, Poland also shows a period of religious decline between 1990 and 1996. Unlike East Germany, however, Poland was a religiously saturated country prior to the collapse of the Soviet Union.
2

Although a degree of modernization took place in East Germany with democratization, it is very unlikely that the sharp decreases in religiousness observed in Table 1 could be attributed to the relatively marginal modernization that took place within this very short timeframe of six years. I suspect that even the most radical secularization theorist would agree.

3

Related research has found religiosity to have a weak relationship with far-right voting (Arzheimer and Carter 2009; Montgomery and Winter 2015). My findings are distinct, however, as I examine not just right-wing secularity, but also right-wing secularization over time. It may also be worth noting that relative secularity can be found in certain forms of right-wing populist rhetoric in other contexts, such as the rhetoric of Geert Wilders in the Netherlands. However, whether such rhetoric has spurred secularization has not been demonstrated.

4

This is not to suggest other factors did not contribute to this period of religious decline. For example, the religious decline observed during this time may also be due to factors such as the introduction of church taxes (Froese and Pfaff 2005) and intergenerational religious change (Wolf 2008). This period of secularization is not limited to either of these factors, however. While the introduction of church taxes may lower church attendance, there is no reason to conclude that it would bring about such abrupt decreases in religious belief or identification (see Table 1). The same can be said of the effect of intergenerational religious change within the very short six-year timeframe contained in Table 1.

5

In terms of church attendance, belief in God, and self-assessment of religiosity, data from the European Values Study show that the AfD consistently report the second lowest rates of religiosity among all party constituencies in Germany (data available upon request). The only constituency less religious than the AfD is Die Linke ("The Left"), which is no surprise considering that Die Linke is the current descendant party of the GDR.
6
The 2017 data are from the integrated dataset (EVS 2017)-matrix design dataset (digital object identifier: 10.4232/1.13314). This was the latest version (2.0.0) of the dataset when this project began. It can be made available upon request by contacting GESIS Leibniz Institute for the Social Sciences (email: [email protected]).
7
Belief in God and self-assessment of religiosity are contained in the International Social Survey Programme module of the dataset and thus, there are fewer cases to observe for these metrics than there are for church attendance.
8
Although it may appear that this coding would minimize the potential size of the "moderates" group, it must be noted that "5" is by far the most frequent self-categorization, with 1465 cases (there are 692 cases for the next highest category ("6")). This coding is a function of the distribution of the ideology variable, as "9" and "10" are the two smallest categories, reporting 50 and 102 cases, respectively. Even among AfD supporters, 5, 6, 7, and 8 are each individually more frequent self-assessments than 9 and 10. There are considerably more cases on the lower end of the ideological spectrum, with 156 and 242 respondents categorizing themselves as "1" and "2", respectively. With the coding scheme employed for this dataset, there are 1705 left-wing cases, 1465 moderate cases, and 1553 right-wing cases.

9

Respondents who "tend to" and "completely" agree are analyzed together rather than separately to assure that there are enough cases to conduct the analysis. Combining these responses in the generation of this variable of fear of foreign domination is particularly important because this metric comes from the ALLBUS dataset, in which two of the three metrics of religiosity are contained in the International Social Survey Programme module, which is a subsample of the ALLBUS dataset.

10

EVS 2017 does not divide Berlin into its former East and West territories. To assure accurate geographical representation is achieved, I test both possible categorizations of Berlin for analyses using EVS 2017 data. In this article, Berlin respondents are excluded from the East German category and instead categorized as West Germans. However, I perform the same analyses with Berlin categorized as an East German city (not reported, but available upon request). This recategorization does not affect the results in a meaningful way.
---
Data Availability Statement:
The 2017 data are from the integrated dataset (EVS 2017)-matrix design dataset (digital object identifier: 10.4232/1.13314). This was the latest version (2.0.0) of the dataset when this project began. It can be made available upon request by contacting GESIS Leibniz Institute for the Social Sciences (email: [email protected]). The 2018 ALLBUS data are publicly available: https://search.gesis.org/research_data/ZA5272, accessed on 1 May 2023.
---
Conflicts of Interest:
The author declares no conflict of interest.
---
Appendix A
---

IMPORTANCE Evidence suggests that racial disparities in health outcomes disappear or diminish when Black and White adults in the US live under comparable living conditions; however, whether racial disparities in health care expenditures concomitantly disappear or diminish is unknown. OBJECTIVE To examine whether disparities in health care expenditures are minimized when Black and White US adults live in similar areas of racial composition and economic condition. DESIGN, SETTING, AND PARTICIPANTS This cross-sectional study used a nationally representative sample of 7062 non-Hispanic Black or White adults who live in 2238 of 2275 US census tracts with a 5% or greater Black population and who participated in the Medical Expenditure Panel Survey (MEPS) in 2016. Differences in total health care expenditures and 6 specific categories of health care expenditures were assessed. Two-part regression models compared expenditures between Black and White adults living in the same Index of Concentration at the Extremes (ICE) quintile, a measure of racialized economic segregation. Estimated dollar amount differences in expenditures were calculated. All analyses were weighted to account for the complex sampling design of the MEPS.

---
Introduction
In 2011, the groundbreaking Exploring Health Disparities in Integrated Communities (EHDIC) study found that racial disparities in hypertension, diabetes, obesity among women, and use of health services either disappeared or substantially diminished when comparing a sample of Black and White residents of Baltimore, Maryland, who lived in racially integrated neighborhoods under comparable living conditions. 1 The results suggested that the social environment may largely drive racial disparities in health outcomes in US populations, emphasizing the role of place in understanding health disparities; however, that study left open the question of whether, despite equal health outcomes, racial disparities in health expenditures would disappear or diminish.
Even at equal levels of health, racial disparities in health care expenditures could arise due to care that is delayed, not recommended, or avoided because of structural and interpersonal racism 2 or due to differences in quality of insurance or treatment. 3 Expenditures may be higher for Black individuals if delays lead to higher health care expenditures when unmet health needs escalate and become urgent and critical or if patients are shifted into more costly health insurance plans.
Expenditures could be lower if Black adults are unable to be retained in care due to biases in the health care system 2 or if care is underused due to differences in which practitioners are in network or due to insurance reimbursement structures.
Given the persistence and size of health disparities between Black and White individuals, 4 and in follow-up to the EHDIC study, 1 our analysis intended to answer the question of whether health care expenditure differences are minimized when Black and White individuals live in similar areas in the US. We defined areas that are similar by levels of racial and economic segregation. As an advancement from previous studies that have looked at health care outcomes in 1 city, 1,5,6 our analysis focused on health care expenditures of Black and White residents in census tracts across the entire US. We hypothesized that there would be no difference in health care expenditures by race when Black and White adults live under similar conditions of economic and racial segregation, in line with previous studies that suggest no difference in health outcomes by race when Black and White people live under similar conditions. 1,5,6
---
Methods
---
Data Sources
---
Study Population
The analytic sample included 7062 Black or White MEPS participants aged 21 years or older who live in 2238 of 2275 US census tracts from 47 states where the population is at least 5% Black. We excluded participants with outlier expenditures (total expenditures >$100 000, representing <0.5% of eligible participants) and any participant for whom we could not calculate an Index of Concentration at the Extremes (ICE) value. Another 335 participants who did not have a positive sampling weight were excluded from the analytical sample. The eFigure in Supplement 1 presents the flowchart of sample selection. We excluded participants of Hispanic ethnicity and persons of other race groups (American Indian or Alaska Native, Asian, Native Hawaiian or Pacific Islander, and those who reported multiple races). Given the nature and history of segregation of Black and White individuals in the US, as well as the unique needs of more recently immigrated populations, the mechanisms underlying differences in health care expenditures for these other populations may be different and warrant separate analysis.
---
Health Care Expenditures
The dependent variables were a binary variable that indicated health care expenditure and a continuous variable for the total amount of health care expenditure on health care services in 2016.
In addition, variables were generated for 2016 health care expenditure by category of health service: office-based visits (eg, primary care and imaging tests), outpatient visits (eg, hospital-based care that does not require an overnight stay), emergency department (ED) visits, inpatient hospital stays, prescription medicines, and dental care visits. Expenditures in the MEPS are defined as the sum of direct payments, which include both the out-of-pocket payments and payments made by insurance.
---
Race and Race-Income Segregation
The independent variable was MEPS participant race, categorized as non-Hispanic Black or non-Hispanic White (hereafter referred to as Black and White). We excluded participants of Hispanic ethnicity and of other race groups. The analysis was stratified by census tract-level ICE values for race and income, a marker of racialized economic segregation. 8 The ICE measure was calculated as the difference between the number of White persons in high-income households (annual household income ≥$100 000) and Black persons in low-income households (annual household income <$20 000), divided by the total population with known income in the same census tract. 9 Quintiles (Q1-Q5) were selected to define strata to be consistent with prior research 9 using the ICE measure.
Quintiles for the ICE measure were computed based on the distribution among census tracts of all MEPS participants aged 21 years or older who lived in census tracts where at least 5% of the population was Black. ICE Q1 had the most population concentrated into the most deprived groups (low income, mostly Black individuals), and ICE Q5 had the most population concentrated into the most privileged groups (high income, mostly White individuals).
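The ICE calculation and quintile assignment described above can be sketched as follows; the tract counts are invented for illustration and are not MEPS or census figures:

```python
def ice(n_white_high_income: int, n_black_low_income: int, total_pop: int) -> float:
    """Index of Concentration at the Extremes for one census tract:
    (# privileged - # deprived) / total population with known income.
    Ranges from -1 (all deprived) to +1 (all privileged)."""
    return (n_white_high_income - n_black_low_income) / total_pop

def quintile(value: float, all_values: list) -> int:
    """Assign a quintile 1-5 (Q1 = most deprived) from the distribution
    of ICE values across all tracts."""
    ranked = sorted(all_values)
    rank = sum(1 for v in ranked if v <= value)  # 1-based rank of `value`
    return 1 + (rank - 1) * 5 // len(ranked)

# Illustrative tracts: (White high-income, Black low-income, total population).
tracts = [(50, 400, 1000), (150, 250, 1000), (300, 300, 1000),
          (350, 200, 1000), (400, 150, 1000), (450, 120, 1000),
          (500, 100, 1000), (550, 80, 1000), (600, 40, 1000), (650, 20, 1000)]
values = [ice(*t) for t in tracts]
print(quintile(values[0], values), quintile(values[-1], values))  # → 1 5
```

A heavily deprived tract (many low-income Black residents, few high-income White residents) lands in Q1; a heavily privileged tract lands in Q5, matching the stratification used in the analysis.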
---
Statistical Analysis
Characteristics and health care expenditures of participants were summarized and compared across ICE quintiles using the Pearson χ 2 statistic for categorical variables and the Wald test after fitting weighted linear regression for continuous variables. Next, characteristics of Black participants were compared with characteristics of White participants living in ICE Q1 and Q5 (2 groups of census tracts with the most racialized economic segregation) and Q3 (the least racially and economically segregated census tracts). Two-part models were constructed to model health care expenditures (the overall expenditure and type-specific expenditures) comparing Black participants with White participants living in the same ICE quintile. Next, the incremental expenditures comparing Black participants with White participants (ie, the marginal effects) were estimated based on the combined part 1 and part 2 of the models. Part 1 of the model was a logit model estimating the odds of having any expenditures, yielding odds ratios (ORs) of having any total or type-specific health care expenditures, comparing Black participants with White participants; part 2 modeled the amount of expenditure among participants with any expenditure.
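The two-part logic combines the probability of any spending with the conditional mean among spenders. A minimal sketch with assumed (not estimated) coefficients; the covariates and the log-link mean model for part 2 are assumptions for illustration, not the study's exact specification:

```python
import math

def p_any(black: int, b1: dict) -> float:
    """Part 1: probability of having any health care expenditure (logit)."""
    xb = b1["const"] + b1["black"] * black
    return 1.0 / (1.0 + math.exp(-xb))

def mean_spend(black: int, b2: dict) -> float:
    """Part 2: mean expenditure among those with any expenditure
    (log-link mean model, assumed here for illustration)."""
    return math.exp(b2["const"] + b2["black"] * black)

def expected_spend(black: int, b1: dict, b2: dict) -> float:
    """Combined expectation: E[Y] = P(Y > 0) * E[Y | Y > 0]."""
    return p_any(black, b1) * mean_spend(black, b2)

# Hypothetical coefficients for one ICE quintile (illustration only).
b1 = {"const": 1.5, "black": -0.8}    # odds ratio exp(-0.8) ≈ 0.45
b2 = {"const": 8.0, "black": -0.35}   # ≈30% lower spending among spenders

# Marginal effect: incremental expenditure for Black vs White participants.
incremental = expected_spend(1, b1, b2) - expected_spend(0, b1, b2)
print(round(incremental, 2))
```

The incremental dollar amount reported in the article corresponds to this combined difference, here a negative number because both the odds of any spending and the conditional amount are lower under the assumed coefficients.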
---
Results
Of the total 7062 MEPS Black or White respondents who lived in census tracts with a 5% or greater Black population in 2016, 33.1% identified as Black and 66.9% identified as White. As indicated in Table 1, the distributions of age (overall mean [SD], 49 [18] years) and sex (52.6% female and 47.5% male overall) were similar across income-race ICE quintiles; however, most other demographic characteristics of residents in these census tracts varied significantly by income-race ICE quintiles.
Higher ICE quintiles (ie, areas with greater White racial and economic privilege) had significantly fewer Black respondents, more people with post-high school education, higher income, less poverty, higher levels of employment and insurance, better mental and physical health, fewer comorbidities, fewer reports of difficulty paying medical bills, and the greatest access to a usual source of care.
As shown in Table 2, across income-race quintiles, there was an increasing trend in the median amount and likelihood of expenditures in total health care, office-based care, prescription drug use, and dental services from Q1 to Q5, with a decreasing gradient in ED expenditures. Mean outpatient and inpatient expenditures showed no clear pattern across the gradient. Across all expenditure categories, the percentage having any expenditure followed similar patterns.
A comparison of Black and White respondents suggested that even in areas where Black and White populations live under similar conditions, their landscapes are different (Table 3). In Q1, Black respondents were more likely to be female, have fewer years of education, have lower family income, have higher rates of public insurance or uninsurance, and have better mental health than White respondents. In Q3, Black respondents were more likely to be employed and to have no comorbidities but were otherwise demographically similar to White respondents. In Q5, Black respondents were younger and had lower income, greater exposure to poverty, nearly 3 times the rate of uninsurance, and almost twice the likelihood of having problems paying medical bills. The number of comorbidities was similar for Black and White respondents in Q1 and Q5, although Black respondents were likely to have fewer comorbidities in Q3. Ratings of good physical and mental health were similar for Black and White respondents in Q1, Q3, and Q5, except for a significantly greater percentage of Black (79.5%) compared with White (71.0%) respondents reporting good mental health in Q1 (P = .01).
In the part 1 fully adjusted models of expenditure within each of the income-race ICE quintiles, the smallest difference between Black and White respondents was in Q3, the most racially and economically integrated area.
---
Discussion
Our analysis sought to answer the question of whether differences in health care spending among Black and White adults would be minimized when Black and White adults lived under similar conditions. Our results offer 2 key takeaways: (1) differences in the uptake and amount of annual health care spending by Black and White adults were minimal in areas where Black and White adults lived under similar conditions of minimal racial and economic privilege; and (2) in contrast, Black adults had 56% lower odds of having any total health care expenditures in areas of mostly White high-income adults (driven largely by less office-based, dental, and prescription drug spending) and, among those with any expenditures, spent 30% less on health care. Black adults in areas of mostly White high-income adults had increased odds of having any ED expenditures and reduced odds of having prescription drug or dental care expenditures, likely driven by the significantly higher rates of uninsurance for Black adults in these areas. There was no significant difference in the amounts spent when looking only among Black and White people who had any expenditure-a proxy for equitable health care access. Altogether, our results suggest that expenditure disparities may disappear, but only under conditions of both racial and economic equity and equitable health care access.
Black adults who spent at all on health care spent equal to or significantly less than their White counterparts who lived under similar social and economic contexts, with the exception of higher outpatient expenditures in Q2. In the areas that were mostly high income with mostly White residents, this amounted to $2145 less spent annually by Black adults. It is possible that Black adults have reduced odds of health care expenditures because they are healthier and do not need the care.
This hypothesis might be supported by the body of research suggesting that, due to long-standing disinvestment in Black neighborhoods, White people living in predominantly Black areas have poorer health than White people living in non-Black segregated areas. 14,15 However, in the following paragraph, we point to several results to support that lower odds and amounts of health care spending are more likely because Black adults are missing out on care that they need.
In areas of extreme racial and economic deprivation or privilege (Q1 and Q5), Black and White adults had equally good physical health and similar comorbidities, yet Black adults still had reduced odds of any health expenditures and lower spending. In the most integrated areas (least deprivation or privilege extreme), Black adults had fewer comorbidities yet similar overall health spending.
Among those who did any spending, Black adults in somewhat integrated areas (Q2) spent 64% more on outpatient care and, in the most integrated areas, 38% less on inpatient expenditures than White respondents. While the lower inpatient spending could be driven by these fewer comorbidities, the specific category of care driving reduced odds of having any health care expenditure was dental care, not inpatient or outpatient care as would be expected for comorbidities. Routine dental care is considered elective care and incurs high expenditures that those with few resources or reserves might forgo; low use of dental services may be a better marker of a health disparity than having healthy oral care. 16 These findings work against the idea that reduced odds of spending by Black adults is due to better health, because even at equal levels of health, Black adults have lower odds of having health care expenditures than White adults. Rather, it may be that Black adults are forgoing care and may be underserved, despite being at equal health as White adults.
Lower odds of health care spending may be attributable to lower access to health insurance or poorer quality of insurance, which would be supported by our findings that most differences in health care expenditures disappeared when only looking at patients with any health care spending-a proxy for people who have entry to at least a minimum of health care. Furthermore, Black adults in the areas of highest White racial and economic privilege were 3 times as likely to be uninsured and have significantly lower income in those areas. Such low rates of insurance and lower economic resources likely mean lower use of health care, even when needed, because of affordability barriers.
Several studies have suggested that insurance access remains a barrier to timely and affordable care, even after implementation of the Patient Protection and Affordable Care Act, and leads to avoiding or paying higher out-of-pocket costs for care. 17 Even with insurance, Black adults often do not get high-value care and have fewer insurance options from which to choose, 18 which may reflect systems of structural racism in health care.
Because our study could not assess the reasons for health expenditures, we cannot know whether, for example, patients receiving low-value care at the ED could have otherwise received high-value primary care or whether their lack of primary care led to a more advanced condition that warranted ED care. A recent analysis found that Black Medicare beneficiaries are more likely than White beneficiaries to be admitted to a hospital or to seek care in an ED for conditions that would otherwise be managed through good primary care. 17 The referral to low-value care is often a response to poor insurance coverage, which our findings also support: rates of uninsurance among Black adults in Q5 (highest White racial and economic privilege) were 3 times higher, coinciding with lower spending on office-based services and greater spending on ED care. Another analysis found that areas with high Black concentrations had fewer insurers participating, possibly suggesting that even when Black residents of these areas have access to health insurance, there are fewer insurer offerings and fewer physicians offering in-network services. 18 Thus, similar to what we found for the area with the highest concentration of Black adults, higher expenditures on care may reflect less spending on office-based care offset by greater spending on ED care.
Another possible explanation for the lower expenditures for Black adults is that White adults are overusing care or receiving care at facilities that charge more for services. Although our study cannot assess the extent to which White patients are overcharged, previous work indicates that White adults make more extensive use of health care services than Black adults, 19 have greater health care spending even under the same insurance plans and at equal levels of health as Black adults, and spend a greater proportion on primary care or specialty care than on ED care. 20 Although we cannot rule out the possibility of overuse or higher charges, the differences in distributions of socioeconomic position and health insurance for Black and White adults suggest that Black and White people in areas that are equal in racial and economic segregation may still live very differently and have different access to health care quality.
---
Limitations
This study has some limitations. Although our study aimed to explore differences in health care expenditures in areas where Black and White people lived under similar conditions and expanded on previous studies by looking at both racial and economic segregation, our analysis may not fully capture how Black and White adults live in similar conditions. Our analysis was restricted to census tracts with 5% or more Black residents. In doing so, many White respondents were excluded, whereas Black respondents were more evenly distributed across the income-race quintiles; however, this criterion also yielded more Black respondents in the highest quintile, which we otherwise would not have been able to assess because of too few Black respondents.
---
Dr Dean had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Modernization in China is accompanied by some specific features: aging, individualization, the emergence of the nuclear family, and changing filial piety. While young Chinese people are still the main caregivers for older adults, understanding their attitudes toward aging and living independently in the context of modernization is important because it relates to future elderly care problems in China. Using in-depth interviews and qualitative methods, 45 participants were enrolled in the study; 38 (84.44%) were women and 37 (82.22%) had no siblings. The ages ranged from 17 to 25 years (mean age = 19.28, SD = 1.74). Results revealed that participants held diverse attitudes about older adults, but the general attitudes were that older adults are lonely, financially disadvantaged, have poor social support, lack hobbies, and care about their children more than themselves. Chinese college students were affected both by traditional filial piety and by individualism; however, of the two, they seemed to put greater value on independence. Moreover, traditional filial piety is changing in a modern direction, affected by Western ideas of individualism: the status of the senior is diminishing, and living with one's parents is no longer regarded as a necessary component of filial piety. Implications concerning age stereotypes, elderly care policies, and strategies are discussed.
---
INTRODUCTION
Modernization is a broad concept that refers to the major social changes that occur when a preindustrial society develops economically, such as industrialization, urbanization, and bureaucratization (Zhang and Thomas, 1994). Since the reform and opening up in 1978, China has been moving toward modernization with some specific features. First, China is becoming an aging society (Bai, 2016). From 1999 to 2018, the number of senior citizens aged 65 years or older in China increased from 86 to 166 million (from 6.9 to 11.9% of the total population; Logan and Bian, 1999; National Bureau of Statistics of China, 2019). However, low fertility since the late 1970s, especially in urban areas, has left many young Chinese without siblings to share the traditional elderly care obligations, increasing their burdens (Bai, 2016).
Second, Western technology, political systems, and culture became a referential frame for the modernization of China (Rošker, 2014). Although some scholars attempted to preserve Chinese traditions regardless of modernization (Rošker, 2008), others considered modernization to be a transformation of essence in the sense of general social consciousness, production, and lifestyles (Li, 1986). For example, two research studies conducted in urban (Yang, 2015) and rural areas (Yan, 2011) of China have both claimed that Chinese society is on the path toward individualization, with the pursuit of privacy, independence, choice, and personal happiness becoming popularized as the new family ideal for Chinese people. However, some research has held a middle ground. For example, Ji (2015) claimed that Chinese individuals are embracing modernization, individual identity, and independence and compromising tradition when necessary, although they still subscribe to patriarchal norms.
Third, modernization changed the power relationships in Chinese society (Zhang and Thomas, 1994). Modernization theory holds that modernization diminishes the status of older adults and disadvantages older generations (Cowgill, 1974). Consequently, people living in more modernized societies may hold more negative attitudes toward aging and the elderly than those living in less technologically developed countries (Bai et al., 2016). In traditional Chinese culture, people respect seniors, regarding them as having wisdom and authority (Hsu, 1953). However, an old Chinese saying, "suffering will occur if you do not listen to seniors' advice," has been restated nowadays by young Chinese adults as "happiness will last for years if you do not listen to seniors' advice." The Stereotype Content Model categorizes stereotypes along two dimensions, warmth and competence (Fiske et al., 2002), and the associated cross-cultural research found that, globally, people regarded seniors as high in warmth but low in competence, including people from Hong Kong, China (Cuddy et al., 2008, 2009). Some studies have also shown that, although China is still on the path to modernization, the image and status of older people have already been negatively affected (Chow and Bai, 2011; Bai, 2016). Luo et al. (2013) found that Chinese students were more negative about older adults than were their American counterparts, even though China is less modernized than the United States. This result cannot be explained by modernization theory. Vauclair et al. (2017) suggested that cultural differences in ageism are more nuanced than modernization theory implies. Other studies turned to the role of antecedents of ageism, such as knowledge and anxiety about aging (Donizzetti, 2019). Luo et al. (2013) also claimed that a lack of gerontological curricula in the Chinese educational system, the caregiving burden faced by the one-child generation compounded with a lack of governmental support for caregiving, as well as the rising youth-oriented consumerist culture may account for Chinese students' more negative attitudes toward aging and the elderly.
Fourth, modernization leads to the breakdown of the traditional extended family and the emergence of the "individualistic nuclear" family (Cowgill, 1986; Aboderin, 2004; Khalaila and Litwin, 2012). Yan (2009) argued that the individualization of China leads to the pursuit of independence among youth, replacing the traditional "big family" ideal. Traditionally, children lived under the same roof with their aging parents and took care of them (Hsu, 1948). Nowadays, the "empty-nest" living arrangement is becoming increasingly common for Chinese seniors, and the number of empty-nest elderly people living without children is increasing (Zhan et al., 2006). Some researchers found that, compared to non-empty-nest elders (elders living with children), empty-nest elderly people had lower psychological well-being (Silverstein et al., 2006), poorer mental health (Liu et al., 2007), more loneliness (Cheng et al., 2015), and more depression (Zhai et al., 2015). However, other researchers found that empty-nesters were no different from non-empty-nesters concerning loneliness (Lin et al., 2009) and subjective well-being (Zhang, 2020), or that empty-nest elderly people were higher in subjective well-being (Liu et al., 2014) and life satisfaction (Sun, 2010; Poulin et al., 2014) than non-empty-nest elders. Although these studies have produced inconsistent conclusions, some researchers blamed young Chinese adults for the emergence of empty-nest elderly people, accusing them of abandoning their filial piety obligations (Pu, 2014).
Filial piety (xiao) is an important cultural concept in traditional China and other Eastern countries. It requires children to make sacrifices for their parents to ensure the continuation of their parents' happiness, not only by respecting older generations but also by taking care of aging parents by living together (Laidlaw et al., 2010). It was said that people in China looked forward to entering old age, when they would enjoy prestigious roles and statuses both within the family and in society (Bai, 2016). However, modernization theory maintains that, as society becomes modernized, filial piety diminishes (Cowgill, 1986). According to this theory, ageism may increase with modernization, and children may live apart from their parents, leaving many elders as empty nesters. Yet, some researchers have not supported modernization theory completely. According to the Traditional-Modern Theory of Attitude Change (Dawson et al., 1971, 1972), when traditional cultural ideas conflict with modern culture, important traditional constructs will continue in the traditional direction, while unimportant traditional constructs will change in a modern direction. Filial piety is an important traditional construct in China, so it should continue in a traditional way. However, research has found that, explicitly, this was the case but that, implicitly, filial piety was changing in a modern direction, affected by modern individualistic western-influenced cultural ideas (Zhang et al., 2021). Bedford and Yeh (2021) discussed the evolution of the conceptualization of filial piety and developed the Dual Filial Piety Model. This model suggests two kinds of filial piety: reciprocal and authoritarian (Yeh, 1997; Bedford and Yeh, 2021).
Reciprocal filial piety is defined as affection-based gratitude and respect for parents' efforts, while authoritarian filial piety refers to the relationship hierarchies and role obligations that demand children's compliance with their parents (Yeh and Bedford, 2003;Bedford and Yeh, 2021). Yeh and Bedford (2003) found that college students had higher levels of reciprocal filial piety than authoritarian filial piety. Feng (2013) argued that Chinese filial piety is going to be more reciprocal and less authoritarian; young Chinese adults will still respect elders but will not completely obey them.
The aforementioned contradictory results came almost entirely from quantitative studies that used predetermined, closed-ended questions about participants' attitudes. These methods may constrain participants' answers and fail to capture the full range of their attitudes. In the context of Chinese modernization, aging and low fertility have increased the elderly care burden on young Chinese people. Young Chinese people are still the main providers of elderly care and will remain so in the near future because, to date, there is no functioning non-familial elderly care system in China (Tu, 2016) and 90% of seniors still rely on familial care (Zhang, 2012). Will young Chinese people be ageist? Will they hold some positive attitudes about seniors, and perceive attitude dimensions beyond warmth and competence? If so, what would these attitudes be? Moreover, what exactly are the attitudes of young Chinese adults toward living independently from their parents? Are they less willing to take care of seniors and live together with them? In general, will modernization lead young Chinese people to be less filial? Understanding these questions is important because they relate to future elderly care problems in China. However, we need more exploratory methods to answer them. We assumed that, in the context of modernization and amid the conflicts between traditional and modern cultures, answers may be more complicated than filial versus unfilial, ageist versus not ageist, or preferring versus not preferring to live independently. Qualitative methods allow us to hear the different attitudes and various voices among young Chinese people through open-ended questions, capturing both common and rare attitudes.
---
MATERIALS AND METHODS
---
Participants
Participants were college students from a university in Shanghai, China. We briefly introduced the purpose and method of the study before several classes in the university. Students who were willing to enroll could either write down their contact information to researchers immediately after the introduction or send a message to the researchers at any other time. In total, 45 participants voluntarily engaged in the study: 38 (84.44%) were women and 37 (82.22%) had no siblings. The ages ranged from 17 to 25 years (mean age = 19.28, SD = 1.74 years). More characteristics are shown in Table 1 and Supplementary Table 1.
---
Researchers
YZh (a woman, 33 years old), JT (a woman, 23 years old), and TL (a man, 23 years old) served as the primary research team; and YZu (a woman, 41 years old) and QH (a woman, 29 years old) provided an outside audit to check on the findings of the research team. In qualitative research, it is crucial that researchers address possible biases that might contaminate the coding and analysis of the data (Stiles et al., 1990). Although they may be firmly committed to honoring the data, no researchers are without bias. Therefore, we tried to articulate these biases at the outset of the study by exploring and discussing our attitudes toward aging and living with parents, as well as our research hypotheses in the team, setting them aside during the analyses, and reflecting on their effect on the analyses by writing reflection memos and discussing these with team members, as has typically been suggested for those involved in qualitative research (Hill et al., 1997).
For readers to evaluate the validity of the results, we share the attitudes of the research team. YZh said she agreed that living apart from children is not necessarily bad for old parents; it could be an active choice of old parents. JT said she thinks being old is unimaginable and lonely; she wants to be with family when she is old, so she also wants to accompany her parents when they are old. TL said he wants to be a filial son and wants to have his own career, but he thinks the two are not contradictory, because living a good life is another kind of filiality. YZu said she believes that young Chinese should respect seniors, but that it is unnecessary to live together because living together would create many familial conflicts. QH said she thinks people can age gracefully and be productive, and that old parents should not live with their children; when old parents cannot take care of themselves, children can hire care workers to look after them.
---
In-depth Interviews
A demographic form covering age, hometown, etc., was given to interviewees to complete before the interviews. Semi-structured interview protocols were used for all interviews. Modifications were made after several interviews. The questions in the final protocol were: (1) motivations to attend the study, e.g., "Why do you want to attend this study?"; (2) attitudes toward older adults and aging, e.g., "What are your perceptions of older adults and aging?"; (3) attitudes toward living independently, e.g., "What are your perceptions of older adults living without children?"; and (4) coping with aging in the future, e.g., "What will you do when your parents/you become old in the future?"
Each member of the primary research team interviewed 15 participants in Chinese. Each interview lasted from 20 to 50 min. After an interview was finished, it was transcribed (excluding minimal responses, e.g., "Hmm," but including special situations, e.g., long pauses) and analyzed within a week by the interviewer. Transcripts were assigned a code number to maintain confidentiality.
---
Data Analysis
Data were managed and analyzed according to Consensual Qualitative Research methods (CQR; Hill et al., 2005) in NVivo 12.0 (QSR International, Australia).
---
Case Summary
Before coding a certain transcript, a research member read the transcript briefly and created a case summary of about 200 words describing the participant's attitudes toward aging and living independently, as well as the reader's initial ideas and feelings after reading the transcript. Although Hill et al. (1997) recommended keeping memos or notes about impressions and comments immediately after the interview, we wrote case summaries after transcription (Miles and Huberman, 1994). The reason for this was that emotional involvement after an interview could sometimes be overwhelming. Summarizing cases after transcription provided "a step" of distance, creating an emotional connection without getting too involved. However, this does not mean that researchers were forbidden from keeping memos after the interviews; if they wanted to, they were free to do so. These memos and case summaries served both as search tools, letting us quickly retrieve participants' basic information, and as reflection memos, because initial thoughts and feelings are more intuitive, less susceptible to bias and research expectations, and more conducive to noticing situations that differ from what we assumed.
---
Coding of Themes
The primary team members examined the first three transcripts individually and placed each block of data related to the same idea into initial themes. Later transcripts were coded according to these initial themes, but when the initial themes were not applicable, either the theme was modified or a new theme was generated. Disagreements about themes and how to block the data were discussed until the team reached a consensus, which was audited by YZu and QH and then reviewed again on the basis of their comments. Four major themes were identified: (1) diverse attitudes toward older adults, (2) perceptions of reasons that lead to unhappiness/happiness in old age, (3) attitudes toward living with and without children, and (4) attitudes and coping strategies when parents/themselves are old. Results were translated into English.
---
Coding of Categories
After blocking data into themes, the three primary team members independently read data within a given theme and coded them sentence by sentence. Each member then categorized codes and devised hierarchical categories. After that, the team met as a group to discuss their codebooks and to arrive at a consensus about the categories and how to word them. The themes and categories were continually modified throughout the process to reflect the team's ongoing understanding of the data.
---
Audit
Codebooks were then sent to the two auditors (YZu and QH) who read them, suggested additions and deletions and returned them to the primary team for further discussion and revision. The primary team then reviewed each case to make sure they had been consistent over time and to be certain that they had remained true to the participants' perspectives. Revisions were made to the themes and about what was included in the themes for each case. The team added notes where appropriate to help them remember issues and questions that occurred to them, but these were not directly stated in the data. Then every case was examined again by the team members, who arrived at a consensus over changes.
---
Cross-Analysis
Using the consensus versions of the codebooks, the primary team met and conducted cross-analysis for all transcripts. The purpose of doing the cross-analyses was to compare the codebooks to determine consistency across the transcripts. We also wanted to ensure that we were applying the same criteria across transcripts. We reviewed each theme separately and evaluated categories. When a category was identified for certain cases, we returned to the raw data to determine whether the same category also fits other cases. During this process, the team developed dimensions for some categories. The team then calculated the number of participants who mentioned a certain dimension of each category for comparison, although some researchers did not support counting exact numbers because participants were not asked the same questions in the same way; therefore, counting numbers could be misleading (Sandelowski, 2001;Neale et al., 2014). Nonetheless, we counted the number of participants rather than using the labels "general" (if a category occurred in all cases), "typical" (if a category occurred in at least half of the cases), and "variant" (if a category occurred in only one or few cases) as in CQR to replace numbers. This was the case because the number of participants in our study was much larger than in the regular CQR process so that most of the categories were "typical." Thus, only showing labels could not provide enough information for comparison. However, using numbers was not meant to convey generalizability beyond the study population.
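The frequency-labelling rule described above can be sketched in code. This is a minimal illustration, not the authors' tooling, and the exact cutoff between "typical" and "variant" is an assumption based on the summary in the text:

```python
# Hypothetical sketch of the CQR frequency labels described in the text:
# "general" if a category occurred in all cases, "typical" if in at least
# half of the cases, "variant" if in only one or a few (here: fewer than half).
def cqr_label(count: int, total_cases: int) -> str:
    if count == total_cases:
        return "general"
    if count >= total_cases / 2:
        return "typical"
    return "variant"

# With 45 participants, quite different counts collapse into one label,
# which is why raw counts were reported instead of labels.
counts = {"children's lack of filiality": 25, "bad social support": 31,
          "living alone": 4}
labels = {name: cqr_label(n, 45) for name, n in counts.items()}
print(labels)  # 25 and 31 both map to "typical"; 4 maps to "variant"
```

As the example shows, in a 45-person sample most categories would share the "typical" label, losing the distinctions that the raw counts preserve.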
---
Stability Check
After we interviewed the 40th participant and analyzed the transcripts, no new themes or categories were added, suggesting that the data were saturated or stable. Even though Hill et al. (2005) recommended a sample size between 8 and 15 participants, we interviewed more participants because we could not reach saturation with 15. Moreover, to check whether the data were truly saturated, five more interviews were conducted. Each of the categories for the five cases was placed into the existing cross-analysis by the two auditors. No new categories were added; the new cases only changed the frequency of each category. Consequently, we determined that the data were stable and that additional cases were unlikely to change the results in any significant way. In total, we conducted 45 interviews.
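The stopping rule above (stop once a batch of additional interviews yields no new categories) can be sketched as follows. This is an illustration under assumed example data, not the authors' actual procedure or tooling:

```python
# Hypothetical saturation check: a batch of new cases is "saturated" if
# every case's categories are already contained in the existing codebook.
def is_saturated(codebook: set, new_cases: list) -> bool:
    return all(case <= codebook for case in new_cases)

codebook = {"loneliness", "children's filiality", "economic affluence"}
confirmation_batch = [
    {"loneliness"},
    {"children's filiality", "economic affluence"},
]
print(is_saturated(codebook, confirmation_batch))           # no new categories
print(is_saturated(codebook, [{"loneliness", "society"}]))  # "society" is new
```

New cases that only repeat existing categories change category frequencies but not the codebook, matching the outcome reported for the five confirmation interviews.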
---
Bias Control
To ensure greater credibility, first, each team member was conscious of her or his attitudes about aging and living independently. Second, each researcher wrote independent methodological and reflective memos about the impact of her/his attitudes on the collected data. Third, the team met on a weekly basis to discuss the degree to which the researchers' attitudes might have influenced the analysis of the data and the level of openness of the participants. Fourth, the team discussed data analyses and translations weekly and reached a consensus on both the results and the translations (Rhodes et al., 1994; Hill et al., 1997, 2005).
---
Compliance With Ethical Standards
According to the accepted research standards incorporated by this study, the privacy of each participant interviewed has been protected. The research team did not use the real names of the participants, nor any identifiable information. All participants were aware that they were participating in a study, knew the research objectives, and consented to participate in the qualitative research. It was explained to the participants that their participation was totally voluntary and if they chose not to participate or answer certain questions there would not be any negative repercussions. Signed informed consent was obtained for both the interview and audiotaping. Two participants did not consent to be audio-taped; however, their signed consent for notetaking was obtained. At the conclusion of the interviews, the participants were thanked for their participation and contribution to the research and were offered 15 RMB (about two dollars) for participating.
---
RESULTS
Results were translated into English; the team reached a consensus on both the results and translations. Quotations are followed by the identifier of the participant. For a clearer presentation of results, Supplementary Tables 2-5 are provided in Supplementary Materials. In these Tables, comparisons are shown along the same rows. For example, in Supplementary Table 2, the two opposite phrases "lonely" and "not lonely" are in the same row. The number of participants that mentioned "lonely" was 15, whereas only one participant mentioned "not lonely." Thus, the readers can compare the two opposite attitudes about older adults easily and see that more participants said that older adults are lonely.
---
Diverse Attitudes About Older Adults
We divided participants' attitudes about older adults into four categories: attitudes concerning physical/mental health, quality of life (QoL), social support, and personality of older adults. We also assigned valence to each attitude as negative, positive, neutral, or unclassifiable based on the information provided by participants and research members reached a consensus on the valence assignment.
---
Attitudes About Older Adults' Physical/Mental Health
More participants held a negative attitude about the physical and mental health of seniors than those who held a positive attitude. Some participants mentioned that they think seniors are unhealthy physically (n = 5), lonely (n = 15), unhappy (n = 3), anxious (n = 2), feeling hopeless (n = 1), and useless to their families and society (n = 3); in contrast, only few participants said seniors are healthy physically (n = 2), happy (n = 5), and not lonely (n = 1).
---
Attitudes About Older Adults' Quality of Life
We categorized the quality of life (QoL) of seniors into four subcategories: overall QoL, busy with labor work, richness of leisure activities, and economic affluence.
---
Overall QoL
One participant said elders' overall QoL is just so-so, not too bad, not too good; some participants said this kind of life suits seniors well (n = 7). "Even though their lives are not that colorful, with less vigor and vitality, compared to our young people, it suits them well." (#6) However, this phrasing may imply the participant's latent attitude that colorful lives do not suit seniors.
---
Busy with labor work
Participants' attitudes lay on a continuum from idle (having nothing to do) to at ease (having something, but not too many things, to do) to laborious (having too many things to do). These attitudes were held by nearly the same number of participants (idle: n = 11, at ease: n = 10, laborious: n = 10). On this continuum, idle and laborious were problematic in the eyes of the participants, while at ease represented a good living status. However, six participants considered that seniors want to keep themselves busy even when they do not need to do things. In these participants' view, laborious work is not a bad thing when it is the seniors themselves who choose to do things rather than being forced to do them.
---
Richness of leisure activities
Several participants mentioned that seniors have rich leisure activities (n = 5); however, many participants described older individuals' lives as having a low or medium level of richness, using different words. For a low level of richness, some participants used words like "boring" (n = 16), which reflects a negative attitude in Chinese, while others used words like "simple" (n = 11), which reflects a positive attitude in Chinese. For a medium level of richness, some participants used the word "repetitive" (n = 2), which reflects a negative attitude in Chinese, while others used words like "regular" (n = 12), which reflects a neutral or positive attitude in Chinese.
---
Economic affluence
Several participants mentioned the economic affluence level of seniors; however, they used words in opposition: affluent (n = 2) and poor (n = 4).
---
Attitudes About Social Support for Older Adults
All five participants who mentioned this category held negative attitudes concerning the social support for older adults. They said seniors have bad marital relationships (n = 2), have nobody to talk to (n = 1), have few friends (n = 1), and have no constant companions, for example, children (n = 1).
---
Attitudes About Personal Traits of Older Adults
Participants used several contradictory words/phrases to describe seniors: closed (n = 7) and open (n = 1), have no hobbies (n = 6) and have many hobbies (n = 3), lazy (n = 3) and industrious (n = 6), self-centered (n = 3) and do not care about the self but only care about children and family (n = 14), dirty (n = 1) and clean (n = 1), and dependent (n = 4) and independent (n = 3). Further, participants described seniors as thrifty (n = 7), brave (n = 1), and nagging (n = 1).
In sum, participants expressed diverse attitudes about older adults, and some participants' attitudes were in opposition or they used varied terms to describe the same thing. However, according to the categories mentioned most by the participants, we summarized participants' general attitude toward the seniors as follows: lonely, live lives with a low or medium level of richness, have poor social support, closed, have no hobbies, industrious, thrifty, do not care about themselves but only care about children and family.
---
Perceptions of Reasons for Unhappy/Happy Late Life
Since participants held diverse yet generally negative attitudes toward seniors, what did they think made seniors unhappy, and what did they think makes seniors happy? Five categories emerged from participants' perceptions of reasons for an unhappy/happy late life. Four categories were the same as above: physical/mental health, QoL, social support, and personal factors. The novel category was society.
---
Physical/Mental Health as a Reason for an Unhappy/Happy Late Life
In this category, physical health ranked first as the main reason for happiness (n = 11) and unhappiness in old age (n = 6). Moreover, for some participants, feeling lonely (n = 1) and losing control (n = 1) were reasons for unhappiness, while feelings of belongingness (n = 3) and being respected (n = 2) were reasons for happiness.
---
Quality of Life as a Reason for an Unhappy/Happy Late Life
Economic conditions were the chief reason mentioned by several participants for unhappiness (n = 6) and happiness in old age (n = 5). Other participants said that having hard work to do (n = 3) or having nothing to do (n = 1) were both harmful to older adults' happiness. Not needing to work (n = 5) but having a rich life (n = 4) were reasons for perceived happiness, which, again, indicates the difference between choosing and being forced to do something. Other descriptions of happiness in old age included living a regular (n = 2), quiet and peaceful life (n = 1) without big stressful life events (n = 4).
---
Social Support as a Reason for an Unhappy/Happy Late Life
Over half the participants mentioned bad social support as the reason for unhappiness in old age (n = 31). In this category, children's lack of filiality was the chief reason (n = 25), including not being around, not accompanying or visiting aging parents, having bad relationships with one's parents, or bringing shame to the family. Some participants did not separate children from other family members and they used the term "family" generally.
Having family conflicts may lead to unhappiness in old age (n = 8). Other reasons mentioned by participants that made older adults unhappy were living alone (n = 4), the death of a partner (n = 1), and living in a nursing home (n = 1).
In contrast, 27 participants mentioned good social support as the reason for happiness in old age. Children's filiality was the chief reason (n = 21), including being around, accompanying or visiting aging parents, having good relationships with one's parents, or bringing glory to the family. Having good family relationships (n = 7), having friends (n = 7), having good neighborhood relationships (n = 3), and having people to accompany them (n = 1) may also make older adults happy.
---
Personal Traits as a Reason for an Unhappy/Happy Late Life
Notably, more participants mentioned personal traits of seniors as reasons that made older adults happy (n = 27) than those who mentioned personal traits as reasons that made older adults unhappy (n = 10). "Having hobbies" ranked first in personal factors for happiness in old age (n = 15), while only three participants mentioned "having no hobbies" as a reason for unhappiness in old age. Other personal traits associated with unhappiness and happiness included being closed (n = 3) and keeping pace with the time (n = 2), respectively; not content (n = 1) and content (n = 5), respectively; pessimistic (n = 1) and optimistic (n = 5), respectively; and bad character (n = 3) and good character (n = 1), respectively. Other personal factors for happiness in old age were having an ability (n = 5; e.g., high education), having faith (n = 1), and being resilient (n = 1).
---
Society as a Reason for an Unhappy/Happy Late Life
This category was mentioned by only one participant, as a reason for unhappiness in old age: "The reasons for an unhappy late life are not only the children but also the outside society. The whole society treats the elders unfairly; they did not enjoy the benefits they should have enjoyed" (#25).
Notably, four participants said that they did not notice any older adults being unhappy; they just saw unhappy older adults in the news. However, one participant was negative about aging and said that all older adults were the same-no one is happier, and no one is unhappier.
In sum, participants' reasons for happiness and unhappiness in old age were almost the same in both content and frequency. However, participants considered personal factors (especially having hobbies) to be more important for happiness in old age as compared to unhappiness.
---
Attitudes Toward Living With and Without Children
Since traditional filial piety is deep-rooted in Chinese culture, and participants ranked children's filiality as the chief reason for the happiness of seniors, we then analyzed participants' attitudes toward seniors living with or without children. Participants compared the cons and pros of the two kinds of living arrangements and considered that they were different in the four categories: mental health, QoL, social support, and personality.
---
Attitudes Toward Influences of Living Arrangements on Older Adults' Mental Health
Many participants stated that it is better for old parents' mental health to live together with children than to live separately from children. First, some participants (n = 12) mentioned that elders living with children were happier than elders living without children because they have children to depend on (n = 10), were less lonely (n = 6), do not have a sense of loss (n = 2), do not need to suffer from missing their children (n = 1), and do not need to worry about their children (n = 1). Some participants said that living with children helps elders feel spiritually supported by children (n = 10), which may also help elders to feel safe (n = 1). Two participants said that, when living with children, elders need to help children do some household duties. This may bring elders a feeling of being useful. One participant said, "Living with children brings older adults a feeling of superiority because her children could take care of her" (#1). Another participant said, "Living without children may lead to some mental illness" (#3). On the contrary, only one participant stated that it is better for elders' happiness to live separately from their children because "they have more freedom; they do not need to depend on children; [and] they can decide what to eat, to buy, when to eat, [and when] to sleep" (#41).
Two participants said that whether an older adult is happy depends on whether his/her children are filial rather than whether his/her children are living with them. Moreover, eight participants stated that being old is lonely, regardless of who you live with, because children are too busy to care for elders regularly: "Although she (grandmother of father's side) lives with us, my parents have to work, and I have classes. We only meet in the evenings. [In] a whole day, what can she do at home? She could not read; she could not watch TV. The only thing she could do is sit there" (#1).
---
Attitudes Toward Influences of Living Arrangements on the QoL of Older Adults
Concerning QoL, three participants said life is more colorful if children are around, another three participants said living with children gives elders things to do, and one participant said living with children enriches elders' lives. In contrast, three participants regarded living with children as a disturbance to elders, and another three participants said it burdened elders. They said elders should be taken care of by their children; however, when living with children, some elders had to take care of their children instead (e.g., cooking): "The nanny with grandchildren. . .although she lives with children and her husband, she does all household duties. I feel she's tired, and she's getting older and older; her children won't help. . .I think she may have some resent. . .but, another nanny, because she didn't live with her children, if she wants, she could also go to visit her children. Usually, she visits neighbors, [I] feel she's very happy" (#28).
---
Attitudes Toward Influences of Living Arrangements on Older Adults' Social Support
Two participants said it is better for family relationships if elders are living with children; while one participant said that if they live together, there will be lots of mother and daughter-in-law conflicts, so it is worse for family relationships.
---
Attitudes Toward Influences of Living Arrangements on Older Adults' Personality
One participant stated that living with children helps elders to be modern and keep pace with the time, although one participant said that it depended on elders and that older adults can have their own lives regardless of their living arrangement.
In sum, more participants thought that living with children was better for elders' mental health than living without children; however, the perceived pros and cons concerning QoL, social support, and personal factors were balanced.
---
Attitudes and Coping Strategies When Parents/Themselves Are Old
The results above concerned participants' general attitudes toward and perceptions of aging and living independently. In this last part, we invited participants to take a more "insider" view, discussing their parents' aging and their own: living independently from their parents when their parents are old, and living independently from their children when they themselves are old. Below, we compare participants' attitudes and coping strategies separately.
---
Attitudes Toward Living Independently From Parents When Parents Are Old
At the time of the interviews, many participants were studying away from home; however, many could not accept separating from their parents when their parents were old (n = 23). They said they would not let their older parents live alone (n = 16) because they would worry about them (n = 1). Ten participants said they could accept living separately from their old parents because it is an inevitable trend (n = 5), they did not want to live with them (n = 2), or parents have their own lives (n = 1). Two participants said that their parents could adapt to living without children nearby when they are old. Furthermore, three participants were struggling with this issue: on the one hand, they wanted to take care of their parents; on the other hand, they wanted to have their own lives and pursue their dreams.
---
Attitudes Toward Living Independently From Children When Participants Themselves Are Old
Many participants said they could accept living without children when they are old (n = 34) because it is an inevitable trend for parents and children to live separately in the future (n = 4), they do not want to live with children (n = 9), or they do not need to live with children (n = 13). Nine participants said it is good for children to live separately because it reduces family conflicts and children can focus on individual development. Ten participants could accept living separately from children conditionally, as long as their children are living nearby (n = 5), children visit often (n = 4), children are filial (n = 1), and they have others to accompany them (n = 1). Moreover, three participants reluctantly accepted living separately from children: "if they do have to go outside to work, I can't force them to live with me. . .just live my own life" (#16). Only one participant said she cannot be apart from her children when she is old because "the parent-child relationship is the most important thing in the world. Blood is thicker than water" (#42).
---
How to Take Care of Parents When Parents Are Old
Participants provided several strategies to care for aging parents. The strategy mentioned by the most participants (n = 31) was "try my best;" that is, participants would try their best to balance the needs of parents and themselves-the needs of filial piety and independence. This strategy included visiting parents often (n = 20), staying close to home (n = 6), living in the same city (n = 3), traveling with parents (n = 3), calling parents on the phone (n = 3) or video calls (n = 3), asking others to take care of parents (n = 3), and buying things for parents (n = 1). However, the "try my best" strategy implies that they would not live with their parents.
The second strategy was to change parents (n = 12), which included encouraging parents to have their own lives (n = 9): "[I] tell them to find some hobbies or go outside" (#2), and moving parents to the city where the participants might live (n = 3): "Maybe I'll pick them up if I'm living in a city someday. I'll try not to let them live alone" (#4).
The third strategy was the "if. . .then" strategy (n = 10); that is, participants will make choices about how to take care of parents according to varied situations. For example, "if I have a good development, I will take parents over to the place I live; if I have a bad development, I will go back to the place my parents live" (#43).
The fourth strategy was to change themselves (n = 3). They said they would give up their career in big cities to go back home and accompany their parents when they are old. One participant said that no matter what happens, she will not live with her parents when her parents became old (#25); and one participant said he will try to reach an agreement with his parents about their living arrangement (#12).
---
How to Be Happy When Participants Themselves Are Old
According to most participants (n = 29), when they are old, they will depend on themselves for happiness. Only two participants said that they would try to educate their children to be filial to ensure happiness. Many participants said they would take up hobbies (n = 22), such as traveling, handicrafts, music, and painting. Some participants (n = 10) said they would make friends to expand their social circle beyond their children. Five participants said they would secure a good socioeconomic status because that determines QoL.
In sum, although many participants mentioned that they cannot accept living separately from their parents when their parents are old, many could accept living separately from their children when they themselves are old. As for coping strategies, most participants did not actually consider living with parents in the future. They are struggling between filial piety and independence; however, it seems that independence wins. Participants prefer to depend on themselves rather than on their children for a happy later life.
---
DISCUSSION
First, participants held diverse attitudes about older adults. However, we did not find that participants regarded seniors as low in competence and high in warmth, as claimed by the Stereotype Content Model (Fiske et al., 2002). Instead, the four dimensions that emerged from the results were physical/mental health, QoL, social support, and personality. These results suggest a need for open-ended methods in future studies of stereotypes or ageism. Moreover, although participants held diverse attitudes about older adults, the general attitudes were that older adults are lonely, financially disadvantaged, have poor social support, lack hobbies, and care about their children more than themselves. It appears that our study supports the view that young Chinese people hold negative attitudes toward older adults. However, was this because of modernization or other antecedents?
An antecedent of ageism is anxiety about aging (Donizzetti, 2019). From a developmental perspective, becoming independent is an important task. Since college students have just gained independence, imagining returning to live with one's parents or future children may be a source of anxiety, which then reinforces negative attitudes toward aging. However, whether independence is an important developmental milestone is culturally dependent. Many researchers have shown that traditional Eastern countries emphasize interdependence more than independence (Hsu, 1953; Shweder et al., 1984; Markus and Kitayama, 1991). Jensen (2008) outlined a cultural-developmental template to illustrate that the development of three Ethics (Autonomy, Community, and Divinity) varies across cultures. She gave the example of religious conservatives, showing that the Ethic of Autonomy may decrease over their lifespan because of the emphasis on renouncing self-interest. Inferring in reverse, the negative stereotypes in our research may still be a result of modernization, which has led to the individualization of China and an increasing need for independence among young Chinese people.
Another antecedent of ageism is knowledge of aging (Donizzetti, 2019). However, we argue that it may not be insufficient knowledge of aging but the "wrong" knowledge of aging that leads to ageism. In our results, participants regarded personality traits such as being independent, modern, and keeping pace with the times as positive, and considered personal traits, especially having hobbies, to be more important for happiness than for unhappiness in old age. This accords with the discourse of active aging: being old is not necessarily negative, and seniors can be active, healthy, independent, and productive (Carr and Weir, 2017). This new discourse of aging is sweeping modernized areas of the world; however, it carries an implication of individualism by prescribing independence as normative and "good" in old age (Ranzijn, 2010). Ranzijn (2010) argued that active aging discourse may paradoxically reinforce negative stereotypes of seniors as ill, dependent, and non-productive; it is also underpinned politically and economically to empower individual responsibility while reducing the burden on family and society.
Second, it seems the two contradictory values-filial piety and independence-coexist in Chinese young adults. For example, we found that children were the most important reason for older adults' happiness or unhappiness. Living with children was considered better for elders' mental health than living without children. These results implied that young Chinese adults may still be affected by filial piety-the responsibility to ensure old parents' happiness. This was confirmed again in the results that participants could not accept living separately from their parents when their parents are old. On the contrary, many of them could accept living separately from their children when they themselves are old. Moreover, they prefer to depend on themselves rather than their children for a happy later life. This phenomenon has also been found in other cultures. For example, Mount (2017) found that middle-class Indian women were pressured to conform to the facets of traditional womanhood while also aligning themselves with modernity.
However, when tradition meets modernity, which will win? The Traditional-Modern (T-M) Theory of Attitude Change argues that individuals are motivated to reduce conflicts between traditional and modern cultures by changing their attitudes to semi-traditional or semi-modern: more "important" traditional concepts will change in a traditional direction, while "unimportant" concepts will change in a modern direction (Dawson et al., 1971, 1972). Since filial piety is ranked as the most important traditional concept by both lay people and experts in China (Zhang and Weng, 2017), according to T-M theory it should change in a traditional direction; that is, people should place more value on filial piety when tradition meets modernity. However, our results refuted this speculation and revealed the superiority of independence, which is consistent with the findings of Zhang et al. (2021) that, implicitly, all traditional concepts are changing in a modern direction, affected by Western individualistic ideas.
For example, some participants said that, regardless of who you live with, being old is lonely because children are too busy to take care of their old parents. Instead, seniors had to take care of their children if they were living together. It seems that the relationship between children and parents is changing in the direction suggested by modernization theory, which holds that modernization diminishes the status of seniors (Cowgill, 1974, 1986). Moreover, although many participants mentioned that they could not accept living separately from their parents when their parents are old, most participants did not actually consider living with their parents in the future. They said they would try their best to visit parents often, stay close to home, live in the same city, and make phone/video calls. However, the "try my best" strategy implies that parents will be sacrificed, whereas filial piety requires children to sacrifice. Nonetheless, we do not think this implies the death of filial piety, as some researchers have claimed (Yan, 2011). Although preferring not to live together, our participants said they would still be filial to their parents in other ways. Therefore, perhaps the construct of filial piety has simply changed: living together with parents is no longer regarded as a necessary component of filial piety. One survey from the 1990s showed that Chinese residents preferred to live close to, but not necessarily with, their parents (Tu, 2016). Prior results from dual models of filial piety found a weakening of authoritarian filial piety but the maintenance of reciprocal filial piety (Yeh and Bedford, 2003; Feng, 2013).
In conclusion, our results suggest that Chinese college students are affected both by traditional filial piety and by individualism; of the two, however, they seem to place greater value on independence. Moreover, traditional filial piety is changing in a modern direction under the influence of Western individualism: the status of older people is diminishing, and living with one's parents is no longer regarded as a necessary component of filial piety.
---
Implications
The study had several implications. First, our results suggested that young Chinese people hold generally negative attitudes toward aging in four dimensions (physical/mental health, QoL, social support, and personal factors) rather than in the two dimensions of the Stereotype Content Model, warmth and competence (Cuddy et al., 2008). Future studies of stereotypes should not be limited in scope and could use more open-ended methods. Second, we should be cautious about active aging discourse that assumes being healthy and independent is normative in old age, thereby reinforcing negative stereotypes of seniors as ill and dependent (Ranzijn, 2010). Third, young Chinese adults prefer to be independent and to live independently from their parents; we therefore predict that the Chinese elderly care model will become more Westernized in the future. However, non-familial care in China is underdeveloped, and the increasing number of empty-nest elderly people will require care (Tu, 2016). Consequently, the Chinese government and non-government institutions should determine how to develop non-familial care for elders. Fourth, young Chinese adults are still affected by traditional filial piety, but its content is changing to accommodate the pursuit of personal development and happiness. Thus, we could accommodate this change by providing children more opportunities to live near their parents instead of with them, to video chat with them, and to pay them visits.
---
Limitations and Strength
This study had some limitations. Its main limitation is sample bias: as a qualitative study, it enrolled only a small number of participants. Specifically, participants came from a university in Shanghai, were in their 20s with many of their parents in their 40s, and most were female and an only child. Therefore, this study's generalizability is limited and its results cannot be extended to other groups. This limitation may be inherent to qualitative research: qualitative findings are not intended to be generalized but rather to shed light on the attitudes or experiences of participants, revealing other voices. Our research likewise aims to share the varied attitudes of Chinese youth toward aging and living with parents; whether these voices are representative of all Chinese youth remains questionable. The risk of sample bias means we should be cautious about our conclusions, which could be re-examined by future quantitative studies. Second, we presented some information that may affect the validity of the results, such as participants' motivation for taking part and the researchers' gender, age, expectations, and biases; however, how these affect the study's validity is unclear. Future methodological studies could explore these aspects more deeply; nevertheless, providing background information remains important for readers to grasp the context of the "story." Third, participants answered questions about their assumed attitudes and coping strategies for when their parents and they themselves grow old; longitudinal studies addressing changes in participants' attitudes should be considered.
Despite these limitations, this study advanced the exploration of young Chinese adults' attitudes toward aging and living independently and, more specifically, deepened understanding of Chinese youth undergoing social change. It also suggested additional dimensions of age stereotyping and methods for studying stereotypes, and pointed out the potentially negative effect of active aging discourse on ageism as well as implications for future elderly care models and strategies.
---
DATA AVAILABILITY STATEMENT
The datasets presented in this article are not readily available because some interviews may include identifiable personal information. Requests to access the datasets should be directed to YZ, [email protected].
---
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by East China Normal University. The patients/participants provided their written informed consent to participate in this study.
---
AUTHOR CONTRIBUTIONS
YZh contributed to research design, interviews, data analysis, and manuscript writing. JW contributed to research design, theory construction, and manuscript revision. YZu contributed to participant recruitment, interviews, and data analysis. QH contributed to interviews and data analysis. All authors contributed to the article and approved the submitted version.
---
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2021.609736/full#supplementary-material

Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The study examined the argument that cohabitation as a form of union increases physical violence victimization among women. Its aim was to assess the association between physical violence and marital status, along with other socio-demographic factors that influence physical violence among women. Self-reported data on a sample of 2479 couples were extracted from the couple file of the 2016 Uganda Demographic and Health Survey (UDHS). Chi-squared tests and multivariate Firth-logit regression models were used to examine the relationship between intimate partner violence (IPV) victimization and marital status, controlling for other socio-demographic factors. There was no significant evidence that women in cohabiting unions have a higher risk of exposure to physical violence in the Ugandan context. The risk of experiencing physical violence varied by birth cohort, with the most recent cohorts exhibiting a slightly higher risk of partner violence than earlier cohorts. Factors significantly associated with an increased risk of IPV included being in the poorer, middle or richer wealth quintile compared with the poorest, residing in the Eastern or Northern region compared with the Central region, being affiliated to the Catholic faith compared with the Anglican faith, and having five or more children compared with four or fewer. In conclusion, there is no evidence that physical violence is more pronounced among women in cohabiting unions than among married women in Uganda.
Violence in intimate unions has been widely researched since the 1970s and gained momentum after Makepeace pioneered a study of violence among dating couples (Makepeace, 1981). In the 1980s, cohabitation as a form of first union began rising in both developed and developing countries. Indeed, cohabitation is said to be influencing nuptiality patterns as a first co-residential union in recent times (Kiernan, 1991, 2001; Mokomane, 2005, 2013; Posel & Rudwick, 2013). In response to these trends, recent research in family demography has become increasingly interested in understanding differences between cohabitation and marriage along several dimensions. Accompanying this paradigm shift is the argument in the emerging literature that, because of their characteristics, cohabiting unions are more violent than marital unions (Kenney & McLanahan, 2006; Wong et al., 2016). In a study conducted in Peru, Flake reported cohabiting union as a family-level risk marker that increases a woman's likelihood of abuse (Flake, 2005). In Uganda, as in many parts of Africa, both violence in intimate unions and cohabitation are on the rise. For instance, the proportion of women aged 15-49 in cohabiting unions increased from 14% in 2001 to about 27% in 2011 (Uganda Bureau of Statistics (UBOS) & ICF International Inc., 2012; Lwanga et al., 2018). Notable, too, is the increase in the proportion (27%) of women experiencing intimate partner sexual violence (Wandera et al., 2015).
While, on the one hand, cohabitation can offer intimacy and a family-like environment with egalitarian family structures, on the other it offers a lower level of economic consolidation and a weaker relationship, lacking an intrinsic barrier against union separation. As a result, one might expect less violence among cohabiting women than among married women. Previous studies have demonstrated the contrary: that cohabiting women have a greater risk of experiencing violence in union than married women (Wong et al., 2016). Using data from Hong Kong medical charts between 2010 and 2014, Wong et al. also found cohabiting women to be nearly twice as likely as married women to experience multiple injuries and physical violence. Nock (1995) theorizes that the difference in the level of intimate partner violence (IPV) between cohabiting and married unions arises because marriage is governed by an institution whose relationship is enforced by social and legal rules, unlike cohabitation. Ellis, for instance, argues that the presence of marital norms and the greater investment in the union common among the married contribute to lower levels of violence in marriage (Ellis, 1989). Prior research has been consistent with this. Wilson and Daly (2021), for example, argue that the social and financial costs of ending a marriage are higher than those of ending a cohabiting relationship, and thus married couples invest heavily in developing strategies to mitigate IPV.
In Uganda, where research on cohabitation and IPV is relatively new, past research on the relationship between cohabitation and violence in union has been limited. The available studies typically investigate cohabitation and union dissolution (Lwanga et al., 2018). With regard to intimate violence, past research has examined the link between empowerment, partners' control and IPV (Kwagala et al., 2013, 2016; Wandera et al., 2015; Gashaw et al., 2018; Gubi et al., 2020). Elsewhere, there have been attempts to link IPV and pregnant women (Gashaw et al., 2018). Less studied, however, is whether being in a cohabiting union is, at first sight, a licence for intimate partner physical violence, and whether the relationship varies between birth cohorts. Numerous studies have investigated the relationship between IPV among women and empowerment, modern contraception, maternal health services and partners' behaviours; however, no known study has specifically focused on how union type relates to physical violence against intimate female partners. The current study used self-reported data on physical aggression against currently married or cohabiting women, extracted from the 2016 Uganda Demographic and Health Survey (UDHS), to examine whether IPV is more pronounced in cohabiting than in married unions; to assess whether the association varies between birth cohorts; and to examine other factors that influence physical violence among women in union.
---
Theoretical consideration
A number of theories and research findings on how cohabitation or marriage may influence IPV among women have been inconsistent and inconclusive. According to social learning theory, observing how parents and significant others behave in intimate relationships provides an initial learning of the behavioural alternatives appropriate for such relationships (McLanahan & Sandefur, 1994; Clarkberg et al., 1995; Axinn & Arland, 1996). This theory fails to separate the effects of witnessing and of experiencing violence in the natal family. However, its argument suggests that, if a family of origin managed stress and frustration using anger and violence, children from such a home environment would be at great risk of exhibiting the same behaviours, witnessed or experienced while growing up. This argument is consistent with the intergenerational theory of violence (Besemer, 2017). In the present context, the theory implies that exposure to violence teaches children that the use of partner violence is acceptable and an effective way of solving problems. This description is consistent with the patriarchal norms and beliefs that are the foundation of male-to-female partner violence, as enshrined in male-peer support theory, where anticipated rewards seem to outweigh social and non-social costs (DeKeseredy & Schwartz, 2016). Social context theory posits that the lack of social support to integrate cohabiters into society, and the lower support from family and friends relative to marriage, can lead to intimate violence (Skinner et al., 2002). Its proponents argue that, because parents and kin are not involved in the decision to cohabit, they are unlikely to be engaged whenever there is union instability.
In addition to reduced social support, cohabiters face other issues, such as a lack of commitment, which inhibits couple investment and creates a context for diminishing relationship quality; this contextual explanation appears to be related to the differential selection perspective. As cohabiting unions become more numerous and more children are born within such relationships, cohabitation may be taking on more of the functions of marriage (Cherlin, 2004; Perelli-Harris et al., 2019).
The three study hypotheses were: first, that there is no significant difference in IPV between cohabiting and married unions; second, that where differences do occur, they are likely to vary across birth cohorts; and third, that couples with higher levels of education are likely to have lower rates of violence.
---
Methods
With permission from the ICF International website, data were obtained from the 2016 Uganda Demographic and Health Survey (UDHS). The DHS surveys are part of a worldwide survey programme and are a source of nationally representative data capturing individual- and household-level socio-demographic, health and sexual activity, maternal and child health, mortality, fertility, family planning and nutrition data. The Uganda DHS was implemented by the Uganda Bureau of Statistics (UBOS) with technical assistance from ICF International and funded by the United States Agency for International Development (USAID). Data were collected from a sample of female respondents aged 15-49 and male respondents aged 15-54, selected from 112 administrative districts (UBOS & ICF, 2018). The survey was based on a probabilistic sample originating from multistage cluster sampling, stratified by rural and urban areas. The DHS programme has been collecting information on intimate partner violence in Uganda since 2006 using a domestic violence module which addresses women's and men's experience of interpersonal violence (UBOS & Macro, 2007).
In violence-related studies, it is more likely for one partner to report having experienced violence than for both to agree that they have ever experienced it (Szinovacz, 1983). This is based on the claim that individuals are more likely to report no violence where it has been experienced than to report violence where it has not occurred. This is particularly true in a patriarchal society, where anecdotal information suggests that women believe a husband is justified in beating his wife. In response to this assertion, the study's dependent variable (the indicator of violence) was constructed using couple data. Couple data were obtained by merging data from women and their partners living within the same household, yielding a sample size of 2479 couples.
---
Measures of outcome variable
Physical intimate partner violence (physical IPV) is a dummy variable created from a general question asked of all men who had ever been, or were currently, in a union: 'Have you ever hit, pushed or shook, slapped, punched with fist, arm twisted, kicked or dragged, strangled or burnt, or done anything else to physically hurt your (last) (wife/partner) at times when she was not already beating or physically hurting you?' Based on the man's response, a dummy variable was created indicating whether he had perpetrated physical violence (less severe or severe) against his wife or partner. In this paper, whoever perpetrated 'less severe' or 'severe' violence was coded as 1 (Yes) and 0 otherwise.
---
Measures of explanatory variables
The main explanatory variable was marital status, categorized as 'married' or 'living with a partner' (cohabiting). This was included in the model as a categorical variable indicating whether a woman was married (coded '0') or living with a partner (cohabiting; coded '1'). Other independent variables included birth cohort, wealth index, education, type of residence, region, children ever born, working status, religion and whether the wife/female partner earned more than her husband/male partner. These variables were selected based on previous studies (Wong et al., 2016). The variable 'birth cohort' was generated from current age and categorized as 2001-2005, 1996-2000, 1991-1995, 1986-1990, 1981-1985, 1976-1980, 1971-1975 and 1966-1970. The wealth index was based on couples' combined income, grouped as poorest, poorer, middle, richer and richest. Education level was categorized as no education, primary, secondary and post-secondary, but modelled as less than secondary, secondary and above secondary education. Religious affiliation was categorized as Catholic, Anglican, Pentecostal, Seventh Day Adventist (SDA) and other. Type of residence was grouped as rural and urban; employment was coded as working and not working. A separate dummy variable for whether the female respondent earned more than her husband/partner was coded as yes or no; and years since union (cohabitation or marriage) was grouped as 0-4, 5-9, 10-14, 15-19, 20-24, 25-29 and 30 or more years.
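As a purely illustrative sketch (the variable names and category strings here are hypothetical, not the actual UDHS codebook), the recoding described above can be expressed in a few lines of pandas:

```python
import pandas as pd

# Hypothetical rows standing in for UDHS couple records
df = pd.DataFrame({
    "marital_status": ["married", "living with partner", "married"],
    "birth_year": [1997, 1983, 1972],
})

# Cohabitation dummy: married = 0, living with partner (cohabiting) = 1
df["cohabiting"] = (df["marital_status"] == "living with partner").astype(int)

# Five-year birth cohorts matching the groups used in the paper
bins = [1965, 1970, 1975, 1980, 1985, 1990, 1995, 2000, 2005]
labels = ["1966-1970", "1971-1975", "1976-1980", "1981-1985",
          "1986-1990", "1991-1995", "1996-2000", "2001-2005"]
df["birth_cohort"] = pd.cut(df["birth_year"], bins=bins, labels=labels)
```

The same pattern extends to the other categorical recodes (wealth quintile, education, residence, and so on).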
---
Statistical analysis
Frequency distributions were used to describe and summarize the characteristics of the women in the sample. Then, the characteristics of cohabiting and married women were compared. The relationship between the dependent variable (whether or not a woman was physically abused) and the explanatory variables was assessed at the bivariate level using the chi-squared test, with significance set at p<0.05. In the data, about 29% of men reported having perpetrated physical violence; the relative rarity of this event raises the possibility of bias in standard maximum likelihood estimation and of perfect separation (Firth, 1993; Coveney, 2008; Rahman & Sultana, 2017). Perfect separation usually happens when a predictor variable, or combination of predictors, perfectly separates the categories of the outcome variable. For these two reasons, Firth's penalized logistic regression models were used with the explanatory variables to examine the context of physical IPV among women in cohabiting unions in comparison with married women (Heinze & Schemper, 2002; Coveney, 2008).
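To illustrate why Firth's penalty helps under separation: it maximizes the log-likelihood plus half the log-determinant of the Fisher information, l*(β) = l(β) + ½ log|X′WX| (Firth, 1993), which keeps estimates finite even where ordinary maximum likelihood diverges. The following is a minimal numpy/scipy sketch on made-up data, not the authors' Stata code:

```python
import numpy as np
from scipy.optimize import minimize

def firth_logit(X, y):
    """Logistic regression maximizing the Firth-penalized log-likelihood
    l*(b) = l(b) + 0.5 * log|X'WX| (Firth, 1993)."""
    def neg_penalized_ll(b):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        p = np.clip(p, 1e-10, 1 - 1e-10)           # numerical safety
        ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        info = (X * (p * (1 - p))[:, None]).T @ X  # Fisher information X'WX
        _, logdet = np.linalg.slogdet(info)
        return -(ll + 0.5 * logdet)
    return minimize(neg_penalized_ll, np.zeros(X.shape[1]), method="BFGS").x

# Made-up, completely separated data: ordinary MLE would diverge here,
# but the Firth estimate stays finite.
x = np.array([-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X = np.column_stack([np.ones_like(x), x])
beta = firth_logit(X, y)
odds_ratio = np.exp(beta[1])  # finite despite the separated data
```

In practice the authors used Stata's firthlogit-style estimation; this sketch only shows the mechanism of the penalty.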
The results of the penalized model are presented as odds ratios (OR) with their corresponding 95% confidence intervals. While it is important to apply weights to account for the complex survey design, clustering and stratification, this was not done because Firth-logit estimation does not support the svy prefix (the Stata prefix command used to account for the complex survey design used in data collection). The fitted model was subjected to the link-test to examine whether the explanatory variables were specified correctly and to assess the goodness-of-fit of the model (Cleves et al., 2010; Hilbe, unpublished; Kohler & Kreuter, 2012). The test uses the _hat and _hatsq statistics; when the model describes the data correctly and is appropriate, _hatsq should not be significant (p>0.05). Before fitting the model, a multicollinearity test among the explanatory variables was conducted (results not presented). The variable 'duration in a relationship', though significant at the bivariate level (results presented in Table 2), was found to be highly correlated with the variables 'children ever born' (r=0.6649) and 'birth cohort' (r=0.8126). The interest in this variable was to create an interaction with the variable 'current marital status', which would help to test for selection bias. However, since the main explanatory variable (current marital status) was not significant, the interaction term would most likely not be significant either, so this variable was dropped from the modelling. Relatedly, the variables 'children ever born' and 'birth cohort' had a positive correlation (r=0.5993); when attempts were made to remove birth cohort and keep children ever born in the model, there was a negligible change in the model diagnostic test results, with _hatsq still being insignificant. Consequently, birth cohort was retained in the model.
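The link-test can be reproduced outside Stata: refit the outcome on the model's fitted linear predictor (_hat) and its square (_hatsq), and inspect the significance of _hatsq. A hedged sketch on simulated, correctly specified data, using a hand-rolled IRLS fit rather than any particular package:

```python
import numpy as np

def logit_irls(X, y, n_iter=25):
    """Plain logistic regression via iteratively reweighted least squares;
    returns coefficients and their standard errors."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ b))
        info = (X * (p * (1 - p))[:, None]).T @ X
        b = b + np.linalg.solve(info, X.T @ (y - p))   # Newton step
    p = 1 / (1 + np.exp(-X @ b))
    info = (X * (p * (1 - p))[:, None]).T @ X
    return b, np.sqrt(np.diag(np.linalg.inv(info)))

# Simulated data with a known, correctly specified model:
# logit(p) = -1 + 0.8*x1 + 0.5*x2
rng = np.random.default_rng(42)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
eta = -1 + 0.8 * x1 + 0.5 * x2
y = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(float)
X = np.column_stack([np.ones(n), x1, x2])
b, _ = logit_irls(X, y)

# Stata-style link-test: logit of y on _hat and _hatsq
hat = X @ b
bl, sel = logit_irls(np.column_stack([np.ones(n), hat, hat ** 2]), y)
z_hatsq = bl[2] / sel[2]  # a large |z| (e.g. > 1.96) would flag misspecification
```

Because the simulated model is correctly specified, the _hatsq term should be insignificant, mirroring the diagnostic result the authors report.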
In the statistical literature, missing data in logistic models can influence regression coefficients, standard errors and statistical power. However, in this study it was assumed that the missing data were missing completely at random (MCAR) and did not bias inferences (Houchens, 2015; Mohamed Reda & Mohamed Gamal, 2018).
---
Results
---
Distribution of respondents by socio-demographic characteristics
Table 1 presents the distribution of respondents by socio-demographic factors. Nearly 29% of the male respondents in the couple sample had perpetrated physical violence against their partners. Among all women included in the couple data, approximately 41% were affiliated to the Catholic faith and 3% to other minority religious groups. The distribution of the women by wealth index shows that nearly 24% were in the poorest quintile and 15% were in the richest quintile. The majority of the women had primary education (60%), while 6% had no education. The majority of the women (69%) were born between 1976 and 1995 and had given birth to no more than five children. Approximately 18% lived in the Central region and 27% in the Western region. Nearly 23% of the women had been in union for no more than 4 years and 3% for more than 30 years. The majority of the female respondents (98%) were working. About 11% of female respondents reported earning more than their partners/husbands and 72% said that they earned less.
---
Differentials in experience of intimate physical violence by socioeconomic characteristics
Table 2 presents the differentials in victimization by selected socioeconomic variables. Marital status (married or cohabiting), number of unions, education level and religious affiliation were not significantly associated with physical IPV. The prevalence of the perpetration of physical violence was nearly 30% among married men and about 27% among cohabiting men. Physical violence varied significantly by wealth status (χ2=14.3, p=0.006), birth cohort (χ2=40.6, p<0.001), type of residence (χ2=4.02, p=0.045), region of residence (χ2=19.13, p<0.001), number of children ever born (χ2=50.01, p<0.001) and duration in union (χ2=63.5, p<0.001); it was weakly associated with women's working status (χ2=2.99, p=0.083) and with the wife/female partner earning more than the husband/male partner (χ2=4.81, p=0.090).
---
Multivariate results
In isolating the net effect of each independent variable on physical IPV, a final model was built based on the predictors identified in the bivariate analysis. All independent variables significant at the bivariate level were included in the model, in which the dependent variable was the perpetration of physical violence in union. These included wealth index, birth cohort, type of place of residence, region of residence, children ever born, current working status and whether the woman earned more than her husband/partner. Education level and religious affiliation were not significant at the bivariate level; however, both have been found to be significant among married women in Uganda (Wandera et al., 2018). Current marital status was likewise not significant, but it was found to be significant in Hong Kong (Wong et al., 2016), and it is central to the main argument of this study. Based on these studies, current marital status, education level and religious affiliation were also included in the final model. Table 3 presents the results of the Firth-logistic model. The table shows that the odds of experiencing physical violence were 1.96 times higher for women in the poorer wealth quintile (95% CI=1.29-2.98, p=0.001), 1.75 times significantly higher for the middle quintile, and 2.0 times higher for the richer quintile (95% CI=1.28-3.13, p=0.002), relative to women in the poorest quintile. The odds of experiencing physical violence decreased among women born in 1996-2000 (OR=0.19; 95% CI=0.04-0.92, p=0.04), 1991-1995 (OR=0.24; 95% CI=0.05-1.13, p=0.07), 1981-1985 (OR=0.19; 95% CI=0.04-0.90, p=0.036), 1976-1980 (OR=0.22; 95% CI=0.05-1.06, p=0.059), 1971-1975 (OR=0.25; 95% CI=0.05-1.21, p=0.084) and 1966-1970 (OR=0.22; 95% CI=0.04-1.11, p=0.067), compared with those born in 2001-2005.
Furthermore, there was an increased likelihood of experiencing physical violence among women from the Eastern (OR=2.23; 95% CI=1.45-3.43, p<0.001) and Northern (OR=1.84; 95% CI=1.14-2.98, p=0.013) regions compared with the Central region. Women affiliated to the Catholic Church were more likely to experience physical violence than those affiliated to the Anglican Church (OR=1.39; 95% CI=1.04-1.86, p=0.026). In addition, women with six or more children were nearly two times more likely to be victims of physical violence than those who had fewer than six children (OR=2.1; 95% CI=1.48-2.97, p<0.001). Although there was no significant difference between married women and those in cohabiting unions, women in cohabiting relationships were 1.18 times more likely to be victims of physical violence (95% CI=0.87-1.59, p=0.287). Regarding the diagnostic test of the model, the specification error results demonstrate that the Firth-logit model was well specified, as indicated by the _hat and _hatsq statistics (_hat: p=0.019; _hatsq: p=0.254).
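For readers checking such tables: a logit coefficient β with standard error se maps to an odds ratio exp(β) with 95% CI [exp(β - 1.96·se), exp(β + 1.96·se)]. The numbers below are illustrative only, chosen to resemble (not reproduce) the poorer-quintile estimate:

```python
import numpy as np

# Illustrative coefficient and standard error (not taken from Table 3)
beta, se = 0.673, 0.21

odds_ratio = np.exp(beta)                       # ~1.96
ci_low = np.exp(beta - 1.96 * se)               # lower 95% bound
ci_high = np.exp(beta + 1.96 * se)              # upper 95% bound
```

This is why the confidence intervals around odds ratios in such tables are asymmetric: they are symmetric on the log-odds scale, not on the odds-ratio scale.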
---
Discussion
This study addressed three questions: 'Is intimate partner violence (IPV) more pronounced in cohabiting than in married unions?'; 'Does the association between marital status and IPV vary across birth cohorts?'; and 'What other factors influence physical violence victimization among women in union?'. No significant difference was found in physical IPV victimization between women in cohabiting and married unions. These results contradict the findings of Wong et al. (2016) in Hong Kong, who found cohabiting women to be 2.0 times more likely to suffer from physical violence than married women. The insignificant difference in the level of physical violence experienced by cohabiting compared with married women may be understood from four arguments. The first is the transition of Ugandan society from being highly patriarchal to becoming more egalitarian. This has led to status compatibility, which negates the power-control theory (of status incompatibility in intimate relationships). The second is that an increase in cohabitation is currently being experienced in Uganda, implying that marital status is undergoing social change. The implication of this, as suggested by Kiernan (2001), is that the stages of cohabitation can be described as a partnership transition: traditionally, cohabitation was taken to be a deviant phenomenon practised by a small group of people; then a probationary stage to assess a couple's commitment to marriage; later a socially accepted alternative to marriage; and finally, indistinguishable from marriage. In recent decades, the growing tolerance of cohabitation in Ugandan society might also explain the insignificant difference in the experience of physical violence between cohabiting and married women.
Third, in the Ugandan context, entry into cohabitation or marriage is unlikely to follow the selection theory, whereby partners showing low levels of violence, or none, would enter marriage and those in abusive relationships would enter cohabitation, as may have been the case in the past. Fourth, as cohabiting unions become more numerous and children are born within such relationships, cohabitation may be taking on many of the functions of marriage, which suppresses the would-be difference in IPV (Cherlin, 2004; Perelli-Harris et al., 2019).
It is surprising to find that women in the poorer, middle and richer wealth quintiles were more likely to suffer from physical violence than those in the poorest quintile. This may be explained by two arguments. First, women of the poorest income status may share available family income between family expenditure and investment, or may engage in a joint family business with their partners; but as women become financially better-off, they may decide to be financially independent. As a result, such women may follow equality principles for power in a relationship, and if there is tension and conflict, it could escalate into physical violence. Second, social status and access to income might affect the distribution of power and control within a relationship, leading to status incompatibility and reversal. If this is in favour of women in a patriarchal society, where they are regarded as inferior, it might make them vulnerable to IPV.
The reason is that men can feel threatened by wives/partners who outrank them economically and socially (Buzawa et al., 2015;Meizer, 2002).
The lower levels of physical violence victimization among older cohorts compared with younger ones are not surprising and might be explained from two perspectives. The first is the life-course development viewpoint. In Uganda, as in other societies, older people are more likely to possess positive relationship skills than younger ones. In addition, they are less likely to use violent behaviour when dealing with conflicts in intimate or romantic partnerships (Wekerle & Wolfe, 1999). Second, younger women might be in different types of relationships with varying levels of intimate partner violence and commitment (Wiersma et al., 2010). The study found that women from Eastern and Northern Uganda were at a higher risk of experiencing physical violence than those in the Central region: the risk was 2.23 times higher for the Eastern and 1.84 times higher for the Northern region. This is not surprising given that these two regions have high rates of child marriage (marriage before 15 years) compared with either the Central or Western region. In this case, the inequitable gender norms that give rise to child marriages may increase the risk of conflict and physical violence (Kidman, 2017). Some characteristics of women might increase the risk of experiencing physical violence. Results from this study show that affiliation to the Catholic faith is a risk factor for IPV, the odds increasing by 39% compared with affiliation to the Anglican faith. The effect of other religious denominations was not significant. This is surprising because one would expect religious people to have lower rates of IPV victimization. The increased risk among Catholics could be explained by the difference between religious affiliation and religiosity: women could be affiliated to the Catholic faith, but their attendance at religious services, which has been shown to be associated with lower rates of IPV, could be low.
Women with six or more children were found to suffer from physical violence more than those with five or fewer children. Three perspectives are advanced to explain this finding. First, an increase in the number of children might cause emotional and economic strain. Second, it could mean that childcare attention is divided; and third, it might coincide with advancing age, which is often associated with men having extramarital relations. All of these might lead to conflict and consequently physical violence. In most societies in Uganda, issues regarding physical violence are always limited to the couple and to the paternal aunt.
In summary, this study used couple data and the Firth-logit model to assess whether physical intimate partner violence victimization is more pronounced among women in cohabiting unions than among married women. It also assessed whether the association varied across birth cohorts, and whether other factors influence physical violence among women in married and cohabiting unions. These results will be useful to inform policy dialogue and formulation, given the rising trend in domestic violence in Uganda. Future studies should endeavour to collect more data to explore further the linkages between cohabitation as a form of union, education, type of place of residence, work status, women's income being higher than the husband's/partner's, and physical violence. This study has some limitations that future studies could address. First, it used self-reported data, and reported IPV perpetration could be understated owing to recall bias, the sensitivity of reporting violence perpetration and the humiliation of doing so. Some perpetrators could have withheld information regarding their private experiences because of the culture of silence concerning IPV and union. Second, the data used were cross-sectional in nature, and therefore the reported results are associations only and do not imply a causal relationship.
In conclusion, there is no evidence that women in cohabiting relationships have an increased risk of experiencing physical violence compared with married women in Uganda. The study findings suggest that IPV victimization among women in Uganda is influenced by birth cohort, wealth, residence in the Eastern and Northern regions, affiliation to the Catholic faith and having six or more children.
---
Conflicts of Interest. The authors have no conflicts of interest to declare.
Ethical Approval. This study used secondary data and the authors declare that all procedures contributing to this work conform with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008. |
Context: Scholars across holistic, transdisciplinary, place-based fields of research, such as landscape ecology and social ecology, have increasingly called for an 'all-hands-on-deck' approach to transformations toward greater sustainability of social-ecological systems. This Perspective showcases organizational transformation toward sustainability in the context of a research network dedicated to place-based, social-ecological research in Europe.
Objectives: Using the European LTER research infrastructure (eLTER RI) as a case, we analyze recent organizational-level shifts motivated by desires to increase sustainability impact. These shifts include knowledge integration between the natural and social sciences, stakeholder engagement, and a reformulation of administrative guidelines and practices.
Methods: Following a program evaluation, new conversations led to new initiatives in the eLTER RI. As researchers who were involved in the program evaluation and the development of new initiatives, we rely on our professional experience and participant observation to provide insights about this process and its developments.
Results: Recommendations from a recent assessment that critiqued the research infrastructure have now been implemented in the eLTER RI. eLTER has leveraged a unique and timely opportunity (formal recognition and project funding by the EU) to upscale and standardize its infrastructure by creating novel protocols and enacting steps towards implementation.
Conclusions: This Perspective demonstrates how eLTER's research agenda and related protocols have evolved to better integrate multiple knowledge types, promote stakeholder integration into research, and foster greater equity and reflexivity in doing science, all of which are considered necessary to increase sustainability impact. We conclude by considering current and potential future challenges.
While one may feel daunted at the range and complexity of threats to sustainability, particularly at the landscape scale (Müller et al. 2010; Opdam et al. 2013), there are scientifically justified reasons for hope emanating from the social-ecological systems (SES) research community. For example, scholars have recently collected a host of examples of bottom-up science-society collaborative initiatives whose activities have led communities towards more positive, sustainable futures (Bennett et al. 2016). This Perspective article uses similar logic and projects similar optimism, recognizing the benefits of learning from "bright spots" (sensu Bennett et al. 2016) at a time when we are coming to terms with the fact that humanity is dangerously exceeding planetary boundaries (Rockstrom et al. 2009; Murphy et al. 2021). In this essay, we offer an example of organizational transformation toward greater sustainability, building on three decades of social-ecological research and innovation in science administration.
Social Ecology, the intellectual tradition with which we identify, aims "to generate the knowledge necessary to understand this [sustainability] crisis and to react to it in the sense of helping establish the 'ought' state of societal nature relations" (Fischer-Kowalski & Weisz 2016, p.18). Like Landscape Ecology, Social Ecology studies human-environment interactions at the landscape scale (Hausknost et al. 2016; Dirnböck et al. 2013; Haberl et al. 2006). We view these fields of study as overlapping and complementary; they both adopt holistic, transdisciplinary research approaches and are both concerned with addressing questions of sustainability (Linehan and Gross 1998; Naveh 2000, 2005; Wu and Hobbs 2002; Tress et al. 2004; Wu 2008). The subject matter, scale of analysis, research methodologies and, with particular pertinence for this essay, the normative values and assumptions are often shared between the two fields, despite evolving via different intellectual histories.
Recent scholarship has enjoined researchers to apply their skills towards addressing the environmental degradation and social inequities that accompany exceeding planetary boundaries (Meadows 1999;Abson et al. 2017). These researchers call for systemic transformations, based on the idea that, in order to catalyze effective change, scientists should focus on addressing deeper societal trends driving environmental degradation (i.e., deep leverage points) rather than trying to change easier, though ultimately superficial, characteristics (Meadows 1999;Abson et al. 2017). Meadows (1999) suggested that acting on such strategic leverage points could have a profound ripple effect on society, leading to broader systemic changes. Along these lines, Abson and colleagues proposed three realms of leverage for sustainability research: strengthening human-nature interactions, reconfiguring organizational dynamics, and sustainability-related knowledge creation and use. Based on these premises, they call for research and knowledge production on "interventions that simultaneously address organizational reform, human-nature interactions, and knowledge productions" (Abson et al. 2017: 9).
Towards the same goal of catalyzing sustainability transformations, scholars have called for greater transdisciplinarity (TD) in research, in which scholars, practitioners, and stakeholders co-produce knowledge about systems in order to collaboratively advance integrated sustainability knowledge of the system and the direct uptake of this knowledge into planning, policy, and management (Schapke et al. 2018; Schneider et al. 2019). SES research is inter- and transdisciplinary by nature, and the need for effective collaboration between SES research and practice and for stronger science-policy interfaces has been emphasized (Biggs et al. 2021). A study of 31 projects found that practitioners of TD approaches conceptualized the impact of their work in three ways: (1) advancing the knowledge necessary for "…more informed and equitable decision-making, (2) fostering social learning for collective action, and (3) enhancing competencies for reflective leadership" (Schneider et al. 2019, p.26). While these are clearly desirable steps towards sustainability transitions, assessments of long-term systemic change were more ambivalent; TD projects were not able to claim more meaningful impact because of the complexity of the SES in which they worked, the broad diversity of actors and interests affecting the system, and the difficulty of assessing these interacting and overlapping elements.
TD research is only as robust as its ability to incorporate different types of knowledge (e.g., interdisciplinary, practical know-how, Indigenous ways of knowing, etc.) in a single project or endeavor (Godeman 2008;Pohl et al. 2011;Lam et al. 2021;Straub et al. 2021). The integration of natural sciences and social sciences knowledge is a common challenge of transdisciplinary work. One challenge of integration lies in agreeing upon conceptual frameworks and methods that can be used for TD work by diverse collaborators (i.e., can researchers of diverse disciplinary backgrounds speak a common language?). Another challenge lies in the debate between the importance of context-dependent science (Lam et al. 2021) and the need to improve capacity to make generalizations from case study research (Bennett et al. 2021).
In this Perspective, we use the eLTER RI1 as a case study to analyze how an infrastructure for SES research can effectively support knowledge integration, more effective stakeholder engagement, and a more equitable, context-aware way of doing science. In particular, we describe how the eLTER RI has initiated a process of actualizing its potential for societal impact, not only by rethinking and recalibrating what science is done, but in how it is done. We tell this story-of how a large, transnational ecosystem research network began to internalize emerging knowledge regarding the benefits of inter-and transdisciplinarity, and how it has begun to institutionalize particular TD values into its administrative structure and research program.
We will elaborate on how eLTER has incorporated a new conceptual framework, ethical commitments, assessment protocols, stakeholder engagement approaches, and other measures intended to support TD research toward sustainability transformations. These changes can be seen to constitute "interventions that simultaneously address organizational reform, human-nature interactions, and knowledge productions" (Abson et al. 2017: 9), and therefore offer important lessons for those interested in organizational transformations toward sustainability. We show how program changes constitute the seeds of organizational transformation (Baker-Shelley et al. 2017). In doing so, we highlight the necessity of holistic shifts in thought and action, from changes in organizational policies to changes in the way individuals think, communicate, and conduct research. We hope that sharing this story will provide examples and inspiration for others seeking to change the status quo in social-ecological science at the landscape scale.
We, the authors, have both played roles in the eLTER RI by carrying out an EU-funded audit of LTSER research platforms (2016-2020; Holzer et al. 2018a, b; Holzer et al. 2019). DEO has held a leadership role in preparing the PPP and PLUS grants discussed below, as well as advancing some of the strategic documents outlined in the third section below. We believe that our intimate engagement with the RI and the processes described in this paper enable us to share details that improve the transparency of what is often an "insider" process. In the spirit of TD research, we attempt to be self-aware, introspective, and critical when necessary, in order to provide a candid and effective critique of the described organizational transition.
---
The roots of LTER and early efforts to integrate social ecology
The US LTER network was founded in 1980; its primary focus during its initial years was on ecosystem properties and their biophysical variables (Aronova et al. 2010). If human aspects were integrated into LTER research, it was primarily via studies of the negative impacts of human activities on ecosystem function. In the 1990s, US LTER scientists discussed integrating the social sciences into the network's research program, but tangible activities were not immediately initiated (Redman et al. 2004). In its 2002 review of LTER, the US National Science Foundation (NSF) made 27 recommendations, including an explicit call for LTER to collaborate with social scientists, the establishment of cross-site projects, and a focus on synthesis science (NSF 2002;Redman et al. 2004;Mirtl 2010). Redman and colleagues (2004) therefore suggested a conceptual framework for integrating activities and delineated key interdisciplinary questions for LTER to consider. Their approach advocated addressing questions of societal importance which were associated with long-term ecosystem processes, and which were better studied across a network of sites. Over the next two decades, social dimensions were integrated into two urban LTER sites in the US and elsewhere, ultimately contributing to the development of novel frameworks for transdisciplinary research and the study of urban socio-ecology (Grove & Pickett 2019, 2021).
The NSF, as a key mentor of LTER activities, was intent on making the global LTER network independent of US LTER (Mirtl 2010). An international LTER (ILTER) network was established in 1993. Its focus was initially on long-term ecosystem observation, but it grew to engage in "site-based ecological and socioeconomic research," representing a powerful network of ecosystem research facilities comprising 40 national member networks by 2008 (Mirtl 2010).
In 2001, concurrent to the US LTER review, a European Environment Agency (EEA) report critiqued the fragmentation of ecosystem research in Europe and called for stronger links between ecosystem research and monitoring (i.e., Gee 2001, as cited in Mirtl 2010). LTER-Europe, or "eLTER" as it is called today, evolved out of a European-funded Network of Excellence, ALTER-Net. Although eLTER's conceptual research framework embraced an SES approach from its inception, implementing this approach has proven challenging.
Within eLTER, SES research was organized around Long-Term Socio-Ecological Research (LTSER) "platforms". These platforms are geographical areas that typically encompass classic LTER sites, but also include broader geographic regions, thereby integrating key cultural, administrative, historic, economic and other social dimensions. An SES approach was further advanced by a series of publications which advocated for a comprehensive shift from LTER to LTSER, set out the theoretical justification for this shift, developed a blueprint for the physical structure of LTSER platforms, and presented case studies of LTSER platforms (Haberl et al. 2006; Singh et al. 2013). While on-the-ground research and ecosystem observation continued to focus primarily on natural sciences expertise and methodological approaches over the next decade, numerous training workshops and exploratory research projects were conducted to strengthen the TD competencies of eLTER scientists.
During the years 2014-2018, with generous funding from a European H2020 grant, eLTER conducted a comprehensive audit of its capacities to conduct SES research and of its output. Results, summarized in a trio of articles by the current authors (Holzer et al. 2018a, 2018b, 2019) and by others (Gingrich et al. 2016; Dick et al. 2018; Angelstam et al. 2019), yielded the following observations and recommendations, among others:
1. Observation: LTSER platform leaders strive to do transdisciplinary SES research, but platform infrastructure lacks frameworks, capacities, and training to enable effective TD work.
Recommendation: LTSER platforms can begin to address this issue by better leveraging the benefits of network membership, such as harmonized datasets, site access, long-term funding, and planning for a coordinated research agenda with local needs for data, knowledge, and relationship-building with stakeholders.
2. Observation: Platforms were reported to be dominated by ecosystem research (72%), with only 28% social research, and platform research programs were typically maintained by 3-5 staff members (Angelstam et al. 2019).
Recommendation: Strengthen the role of the social sciences and humanities, encourage macroecological approaches, and strengthen stakeholder participation. Increase knowledge exchange, reciprocity, and responsiveness between (a) interdisciplinary scientists, particularly social scientists and natural scientists, and (b) scientists and other stakeholders, and interface with other landscape-approach concepts.
LTSER platforms should start organizational change processes by clearly defining objectives (e.g., through writing and publishing memoranda of understanding), outlining protocols (e.g., for defining the research agenda, for how to engage non-academic stakeholders, etc.), and clearly defining roles for personnel.
These findings and recommendations (Table 1) highlight the LTSER platform as a key infrastructure for conducting SES research in Europe. Further, the findings imply that an SES approach does not just advocate adopting a novel research field or agenda, but also requires an alignment of underlying values (e.g., towards inclusivity and integration of new, and sometimes non-scientific, knowledge sources), commitments towards working with stakeholders and building new partnerships, and an adjustment of research agendas to fit stakeholder needs at the landscape scale. In subsequent iterations of eLTER development, the importance of these aspects for advancing LTSER platforms in Europe was recognized by eLTER coordinators and served as a trigger for change; we discuss these transformations below.
---
The path to sustainability: eLTER doubles down on transdisciplinarity
---
Formalizing strategic features of eLTER RI's transdisciplinary research program
Strategic features are elements of a long-term research platform that are "designed and deployed to support multi-sectoral, interdisciplinary, and transdisciplinary collaborations" (Grove & Pickett 2019). They depend upon multiple sectors and disciplines, and are used to create communities, data, and knowledge systems (Grove & Pickett 2019). This section narrates how eLTER has used recent opportunities to formalize strategic features that have been in formation for some time in order to advance desired outcomes. Throughout their histories, ILTER and eLTER have emphasized that their greatest potential contribution to global sustainability is through their research and data (Mirtl et al. 2018; Fig. 1). But these networks also internalized as axiomatic that a sustainability agenda could best be served through the adoption and implementation of a particular way of doing research, i.e., social-ecological, transdisciplinary research. Yet the audit of eLTER's SES agenda described above revealed a significant gap between its aspirations and its implementation.
In 2018, eLTER took the opportunity of being accepted into the European Strategy Forum on Research Infrastructures (ESFRI) to institute broad and deep changes in its physical, administrative, and scientific structure. ESFRI is an advisory body to the European Union (EU) which seeks to strengthen the science-policy interface through the development of research infrastructures of pan-European interest. Acceptance to the ESFRI Roadmap favorably positioned eLTER RI to win H2020 grants for two large-scale projects (starting in 2019): the eLTER Preparatory Phase Project (PPP) and the eLTER Advanced Community Project (PLUS). Though interacting, these two projects differ substantively in their content, with PPP focusing on establishing the legal, financial, and technical specifications of the RI, and PLUS enabling proof-of-concept research based on the conceptual approach and physical capacities of the RI. To date, proposal-writing and subsequent project implementation have afforded eLTER the opportunity to make another strong push towards fuller integration of TD principles into RI activities, and towards realization of the underlying values embodied in transdisciplinarity. eLTER has made formal commitments to leveraging science in the service of sustainability through its PPP and PLUS grants, thereby highlighting its potential to contribute to Europe meeting its Sustainable Development Goals (SDGs). The RI's SES research is touted as the primary mechanism for realizing its contribution to sustainability, but, as we demonstrate below, eLTER is building an array of tools and mechanisms into all the details and processes of doing science, through which, it is hoped, it can maximize its contribution to sustainability.

Fig. 1 Major milestones in the evolution of eLTER RI's organizational orientation. Information compiled from Redman et al. 2004, Mirtl and Krauze 2007, Aronova et al. 2010, Knapp et al. 2012, Vanderbilt and Gaiser 2017, Mirtl 2018a, 2018b, Mirtl et al. 2018, and Dick et al. 2018. Design: Ronit Cohen-Seffer
---

A new vision made real through new documents: strategic plan, ethical frameworks, and formative assessments

What follows is an inventory of conceptual frameworks, tools, and mechanisms recently developed as part of the institutionalization of eLTER RI to focus and support the RI's efforts to pursue its objective of helping to foster societal sustainability transitions. Most of these are noted explicitly in eLTER's newly adopted strategic plan (Nikolaidis et al. 2021), and each is currently being further developed and documented as PPP and PLUS deliverables. It is important to note that these tools and mechanisms for change were developed in a highly iterative process of collective discussion, consultation, debate, and compromise across multiple stakeholder groups (Fig. 2).
---
Grand challenges
In its proposals, strategic plan, and major publications, eLTER designates four global ecological challenges, derived from the EU's 7th Environment Action Programme and other global calls, to frame its research endeavors. These include (1) biodiversity loss; (2) climate change adaptation and mitigation; (3) food security and threats to soil and water; and (4) sustainable management of natural resources. Accordingly, in eLTER PLUS, each of these themes was assigned an RI scientist to serve as a "theme lead", to assure that RI research, data collection, and community engagement focus on these four themes and the integration between them, and to promote knowledge products to the policy community and the public.
---
Whole systems approach
In its far-reaching effort to demonstrate its commitment to interdisciplinarity, eLTER conceptualizes social and biophysical systems into a single SES with multi-directional feedbacks, together with Critical Zone research, which links the disciplines associated with water, air, life, rock, and soil research (Waldron 2020). Accordingly, each of its research, monitoring, and data services is developed within this holistic perspective (Mirtl et al. 2021).
---
Strategic plan
A strategic plan succinctly clarifies the organization's raison d'être, its goals and objectives, and the paths through which the organization intends to reach those goals. As the organization evolves, a strategic plan serves as a reference point for where the organization would like to go and what it intends to achieve. eLTER's strategic plan (see https://elter-ri.eu/), adopted in 2021, sets out its institutional vision to use ecosystem science and research in service of environmental sustainability. "Environmental sustainability can only be achieved on the basis of the robust knowledge and empirical evidence needed to identify and mitigate human impacts on ecosystems. eLTER catalyzes scientific discovery and insights through its state-of-the-art research infrastructure, collaborative working culture, and TD expertise. This enables the development and application of evidence-based solutions for the wellbeing of current and future generations" (Nikolaidis et al. 2021). In other words, eLTER is committed to developing a research infrastructure that will contribute to sustainability not only through data and research products, but also via institutional operating procedures (i.e., its values, ethics, and behaviors). All of the elements included in this list are introduced in the strategic plan as organic to the infrastructure's identity and working culture.
---
Ethical framework
Promoting inclusive societies free of discrimination and reducing inequalities are ubiquitous themes across all of the UN's SDGs (e.g., 13 of the 17 goals refer to social equity and inclusiveness; Gupta & Vegelin 2016). Recognizing the tight link between environmental sustainability, on the one hand, and values and ethical conduct, on the other, eLTER has invested in defining a wide-reaching and ambitious ethical framework. In April 2021, eLTER RI published its Gender Equality Program, which included clear commitments to address systemic and pervasive prejudices faced by women in the scientific and academic communities, and outlined a series of actions that eLTER will take to address these issues within the infrastructure and beyond. In particular, it pledges parity in gender representation in decision-making positions and scientific leadership, and implements internal mechanisms to address gender bias in eLTER and to educate towards inclusivity (Orenstein et al. 2021).
In October 2022, eLTER RI published its Ethical Guidelines (Orenstein et al. 2022), which expanded eLTER's commitment to work for greater inclusion and prevent discrimination to include all demographics (e.g., race, nationality, religion, gender identity, and more), as specified in the European Charter of Fundamental Rights (European Union 2012). To assure these commitments are fulfilled, a volunteer gender equity and non-discrimination ombudsperson position was created to serve as a point of contact for potential complaints and to oversee education programming for eLTER staff and researchers. The Ethical Guidelines also include sections on (1) research process and conduct, (2) data collection, management and dissemination, and (3) organizational environmental performance, each with performance criteria to assure compliance. Finally, the Ethical Guidelines provide for the creation of an Ethical Advisory Board to assess progress and provide oversight.
Since 2020, eLTER has given explicit public expression to its ethical commitments, including official statements regarding eLTER's potential contribution to assessing COVID-19 impact and its condemnation of the 2022 violent invasion of Ukraine. eLTER has also issued public announcements and organized events commemorating the International Day of Women and Girls in Science and International Women's Day.

---

The [holistic] socio-economic impact assessment

TD theory emphasizes self-reflection, both as a means to consider the effectiveness of teamwork and collaboration and the impacts of research activities on outcomes, and also as a mechanism for empowering stakeholders in the process of knowledge co-production (Haberl et al. 2006). Regarding research impacts, eLTER RI will implement a periodic review process in which the impacts of its activities are regularly assessed according to predefined indicators selected to reflect three levels of impact (outputs, outcomes, and long-term impact) for six categories defined in the strategic plan (data services and flow, scientific excellence, stakeholder engagement, cooperation with civil society and private sector actors, training, and conducting societally-relevant SES research). This reflects eLTER's commitment both to hold itself to high standards for scientific, social, and economic impacts, and to address ESFRI's expectations for RIs accepted onto the Roadmap. Within this assessment framework, indicators will assess transdisciplinarity (e.g., working with stakeholders and greater interdisciplinary engagement), giving RI researchers, administrators, and other stakeholders the tools to determine whether, beyond the documents and good intent, a sustainability transition is actually happening in the RI.
---
Integrated governance and stakeholder engagement
TD work in general, and SES research in particular, is predicated on the assumption that research and its outputs can make a stronger contribution to sustainability when conducted in collaboration with stakeholders (Haberl et al. 2006; Holzer et al. 2019; Biggs et al. 2021). Within eLTER's PPP project, comprehensive stakeholder mapping was conducted in which stakeholder communities were defined at multiple spatial scales (local, national, and European). This was done with the explicit goal of developing participation channels and forums for stakeholders to provide input on eLTER's research agenda, and to tailor data collection and services to the specific needs of stakeholder communities (Fig. 2; Barov et al. 2021). Further, various advisory committees were formed and consulted on eLTER's progress, and recommendations from these stakeholders were collected, processed, and integrated into decision-making processes. These advisory committees include the Scientific Advisory Board, the Interim Council, and the Site and Platform Managers Forum, each of which contributed input into the formulation of eLTER policies, which were substantially modified according to this input. Stakeholder engagement at the local and regional scale is a fundamental, explicitly recognized component of the operation of LTSER platforms, eLTER RI's spatial units for SES research (Haberl et al. 2006; Orenstein et al. 2019; Grove & Pickett 2019).
---
Institutionalizing socio-ecological platform structure and objectives
While the idea of dedicated landscape-scale platforms (i.e., geographical areas) in which to conduct SES research has been a prominent feature of eLTER for almost two decades (Haberl et al. 2006; Singh et al. 2010; Mirtl et al. 2013), the 2018 assessment of LTSER platforms described above revealed a loosely-coordinated set of platforms across Europe conducting an eclectic mix of activities of varying import and impact (Holzer et al. 2018b, 2019). eLTER PPP and PLUS set out to bring order to the RI and strengthen the impact and longevity of the LTSER platform structure by developing strict parameters by which a platform is recognized and how it operates. This includes setting standards for program elements such as: (1) which data sets are mandatory or optional, and how these data are collected, stored, and made accessible to stakeholders (eLTER "Standard SES Observation Variables" and its service portfolio for the provision of SES variables and tools for data analysis; Peterseil et al. 2020); (2) staffing platforms with the necessary skill sets for both SES research and stakeholder engagement; and (3) the necessity of a memorandum of understanding to specify which organizations are responsible for overseeing platform research and operations and which stakeholder groups are partnered with the platform. Importantly, these criteria, which are currently being established in parallel for long-term ecological research (LTER) sites, treat the SES research within eLTER on par with its ecological observations and research program. If the recent push for institutionalizing the LTSER platform succeeds, then SES research within eLTER will no longer be a 'side project', but integral to all of eLTER's operations.
Metzger and colleagues (2010) assessed the geographic coverage of LTSER platforms in Europe, identifying a lack of geographical representativeness of LTSER platforms, specifically regarding socio-ecological systems and urban and disturbed regions. They also identified a persistent bias in favor of traditional ecological research (reconfirmed by Holzer et al. 2019) and noted that Mediterranean and Iberian landscapes received relatively little attention (Metzger et al. 2010). Nearly a decade following that study, Mollenhauer and colleagues (2018) conducted another study to determine the SES coverage of eLTER RI. While they found that Mediterranean regions continued to be under-represented in the RI, they also concluded that there had been improvement due to "the impact of strategic efforts made by LTER-Europe in recent years to (i) inform and encourage national LTER site network developments to close gaps or (ii) support the development in countries located in underrepresented areas (e.g. eastern Mediterranean area, LTER Greece)" (Mollenhauer et al. 2018, p. 976). Incidentally, in a separate analysis of the global distribution of ILTER sites, Wohner and colleagues found over-representation in Mediterranean zones and areas of high economic density (Wohner et al. 2021), a gap identified at the European, though not the global, scale. The RI can and should continue to identify gaps in spatial coverage and encourage filling them through the establishment of new sites and platforms, or by co-locating research sites and platforms with sibling RIs and research initiatives, such as the Programme on Ecosystem Change and Society (PECS), Natura and its affiliated organizations, and UNESCO Biosphere Reserves.
---
Increasing relevance to diverse stakeholder communities
As of this writing, eLTER RI is only two years into major projects to develop the RI, so measuring the impact of these innovations would be premature. However, innovative tools to strengthen engagement and integration of stakeholders are already in development. For example, eLTER scientists have introduced two online tools that enable interested individuals to access and process (a) socio-economic and demographic statistical data and (b) bio-physical data extracted from satellite sensors specific to LTSER platforms across Europe (called "cookie-cutting" tools), thereby facilitating SES research at the platform scale and at the cross-platform, continental scale. Further efforts are being invested in a continental-scale citizen science initiative to track biodiversity across platforms via the online app iNaturalist. These examples illustrate how eLTER is prioritizing engagement in science by stakeholders and the public.
---
Discussion: infrastructure changes signal organizational transformation
We claim here that the eLTER RI provides a real-time example of an organization undergoing a significant change to strengthen its potential contribution to a broader, societal sustainability transition (Fig. 3). Organizational change happens simultaneously at the individual, organizational, and extra-organizational levels (Baker-Shelley et al. 2017). This type of significant change requires an array of actions, from cultivating new skills and competencies of members, to implementing new practices across portfolios (e.g., research, professional development, operations, governance, communications) in a holistic manner, to applying relevant external standards to the organization (Baker-Shelley et al. 2017). The changes in eLTER RI policies detailed above encourage participating scientists to adapt their outlook and adjust the process of doing science: to be more inclusive, to include non-scientist stakeholders in meaningful and appropriate ways, to think ahead about the intended impacts of research, to engage in periodic self-assessment regarding gaps between activities and objectives, and to plan for ever-present uncertainty and risk.
Support for these changes rests on an empirical foundation drawn from diverse bodies of literature, including SES research, organizational change, and more. However, these "upgrades" are not entirely due to the aspiration to fulfill the recommendations of sustainability research; they also fulfill the practical requirements of EU funders (H2020, EU, ESFRI) that set requirements for stakeholder integration, gender equality, transparency, and other issues (see, e.g., the "Ethics-H2020 Online Manual", which refers to the EU Charter of Fundamental Rights). While eLTER leadership may have initiated these aspects of science because they are mandated, they have stated that they envision that these organizational changes will improve research output and actionability toward addressing grand sustainability challenges (e.g., Mirtl et al. 2018; Orenstein et al. 2019; Nikolaidis et al. 2021). eLTER's current efforts reflect a process of formalization of its conceptual framework and ethical values. Formalization is important for triggering shifts in how individuals think about TD in their scientific work, for serving as external motivation when old habits creep back in, and for creating boundary objects (e.g., gender equality plan, strategic plan, etc.) that partners and participants can refer to moving forward (Grove & Pickett 2021). Importantly, the commitments described above are "living documents" that will be subjected to periodic review and assessment to facilitate adaptation to changing circumstances. The eLTER RI leadership can attest to the growing pains that come with changing ways of thinking and collaborating; but it is often through a bit of discomfort that individuals realize the changes they need to make to become more effective collaborators (Freeth & Caniglia 2020).
---
Challenges of realizing these strategic priorities
The aspirational objectives outlined here, along with the tools to actualize them, do not diminish the importance of reflecting upon present and future challenges. First among the challenges is executing effective long-term, transdisciplinary social-ecological research in the field. The challenge of producing transdisciplinary, policy-relevant research and maintaining it for extended time periods was analyzed by Holzer and colleagues (2019), and the challenges described there, presented here in Table 1, continue to be relevant today. eLTER RI scientists continue to search for and initiate "proof-of-concept" research that not only reflects that TD research is actually happening, but exemplifies the success of the approach in contributing to landscape-scale sustainability. In other words, examples of research that (1) engages stakeholders meaningfully, (2) conducts policy-relevant research, and (3) shows desirable socio-ecological impacts are few, but eLTER is taking steps to grow in this area. One productive working example that is exceptional within the RI is research in the French Zone Atelier Plaine et Val de Sèvre, which exemplifies all three of these elements (e.g., Bretagnolle et al. 2019; Gaba and Bretagnolle 2020; Berthet et al. 2022). As outlined in Berthet et al. (2022), platform scientists from Zone Atelier Plaine et Val de Sèvre have been working closely with farmers for nearly three decades, during which their research program has become increasingly holistic, evolving from a purely ecological perspective to one that also focuses on socio-ecosystem dynamics. This research group has expanded its focus to a broadening range of agro-ecological and biological conservation issues, and involved a wider range of scientists and stakeholders. Reproducing this success across eLTER RI would profoundly advance these strategic priorities.

A second challenge is realizing eLTER's formalized ethical commitments.
For example, the RI has been engaged in intense debate regarding the environmental impact of flying, but after the cessation of travel for two years during the COVID-19 pandemic, there has been a strong desire to re-establish close working relationships that became distant during the pandemic. Compromises are being tested to reduce the amount of flying; for instance, by holding regional meetings with smaller work teams and setting centrally-located meeting venues with easy access by ground-based transportation. Nonetheless, flying is still considered an absolute necessity by many RI scientists. The first meat-free meeting was also not received positively by many of the workshop participants when the approach was tested, but the RI continues to reduce the amount of meat (and disposable dishes) at its meetings. For most of the other ethical commitments, more time will be needed to see whether the guidelines and the tools for their implementation will be successful.
---
Conclusion: will greater transdisciplinarity lead to greater sustainability?
We recognize that implementing the ambitious, holistic approach outlined in this Perspective is beyond the capacity of most individual scholars. It necessitates a collaborative network with effective communication. Conducting effective team science is an ever-present challenge for eLTER, as it is for TD science in general. This challenge encompasses cultivating teams with a common language and accessible boundary objects to communicate effectively; constant vigilance, renewal, review, and self-assessment; introspection; frequent reminders that we are meant to do things differently; and guidelines and leadership to integrate knowledge when individuals begin to revert to their individual expertise. The eLTER experience shows that significant opportunities to advance these efforts arise when they are explicitly required by granting and government agencies.
While only three years have passed since the 2019 assessment findings were shared, eLTER leadership has institutionalized an SES research approach by defining criteria for platform establishment and operation, defining essential SES variables, and developing tools to compile and disseminate data freely and easily. It has begun to document its commitments to TD principles, including a strategic plan and the eLTER ethical guidelines. These changes magnify the potential of eLTER to contribute to sustainability goals through its institutional structure and ethical commitments, through the production of knowledge and data, and through partnerships with stakeholders that can amplify this work.
It is too early to tell whether these organizational changes will directly spur sustainability transitions in LTSER platforms, but eLTER as a network of scientists and stakeholders, scientific infrastructure, and knowledge production is already a more resilient institution (see, e.g., definition of institutional resilience in Steinberg 2009) as a result of the due diligence, reflexivity, and tough conversations that are shifting the status quo of doing SES science across Europe.
---
Author contributions JMH and DEO: both contributed to the conceptualization, writing, editing, and reviewing of this paper.
Funding The authors have not disclosed any funding.
---
Declarations
---
Competing interests
The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Alcohol displays on Facebook are ever-present and can be socially desirable for college students. As problematic drinking is a concern for college students, this research sought to understand how different types of information on a Facebook page influence likelihood to drink. Telephone interviews were conducted with 338 incoming college freshmen from two large national universities. Data were obtained from a vignette prompt which presented a scenario in which a senior college student's Facebook profile displayed wall-posts, pictures, and status updates that were drinking-related or pro-social in nature. Participants were asked to report intention to drink alcohol with that student if together at a party. Findings supported the hypotheses: wall-posts were most influential (the stickiest), followed by pictures, followed by status updates. Findings provide additional empirical support for established online impression formation patterns, and additionally provide evidence that virtual cues are being ingrained as schema in interpersonal communication. These results are discussed in relation to the conception of "sticky cues" in impression formation. The way that individuals meet is fundamentally changing. While the forming of an acquaintance and initiation of a relationship was once characterized by a process of gradual disclosure (Altman & Taylor, 1973), today it is possible to rapidly access an avalanche of information about acquaintances. Whether this occurs as online daters Google each other for additional information (Gibbs, Ellison, & Lai, 2011) or as adolescents explore a new friend's social network postings (Courtois, All, & Vanwynsberghe, 2010), personal information is readily available. After a few clicks on the computer, one can form impressions of an individual based on everything from their vacation pictures to what others say to them on their social networking profile.
One population for whom online impressions likely play a significant role in the friendship formation process is that of college students. Among college students, the use of online social networking sites (SNSs) has become ubiquitous with up to 99.5% of students | Online Impression Formation & Sticky Cues
This notion, that some information is weighted differently than other information, is not necessarily new. A number of fields have long considered what information matters in judging another. From warmth and competence (Fiske, Cuddy, & Glick, 2007), to beauty (Eagly, Ashmore, Makhijani, & Long, 1991), to negativity and extremity (Skowronski & Carlston, 1989), scholars have long been interested in the basic idea of meaningful cues. This particular research seeks to advance this tradition in its newest iteration: impression formation online.
One account of how judgments arise from online impressions comes from social information processing theory (SIPT) (Walther, 1992). The SIPT framework suggests that when forming an impression of another via an online profile, the perceiver utilizes all the available information, whether in the form of pictures or text, and each bit of information is referred to as a cue. Moreover, some cues matter more than others. Cues that are "especially informative about a message source's credibility" have been deemed "sticky cues" (Van Der Heide & Schumaker, 2013), for they grab the attention of a perceiver in the impression formation process and influence judgment more than other cues.
Previous research has already indicated the presence of cues that might be deemed "sticky" in online profiles. First, research has supported visual primacy in online impressions (Van Der Heide et al., 2012). When photographic and textual self-disclosures are presented together on a Facebook profile, photographs more strongly influence social orientation judgments. Photographs are seen as more credible and are weighted more heavily in impression formation.
In applying patterns of visual primacy and online impression formation to a population of incoming college students, some distinct predictions are possible. We hypothesize that when a college freshman is considering whether to drink with a new friend, the new friend's Facebook photos will receive stronger consideration, and produce more extreme responses, than any textual self-disclosures. H1: For incoming college students, considering pictures on a new friend's Facebook profile will lead to more extreme responses on likelihood to drink intentions, whether positive or negative, compared to status updates.
Another form of information that may represent a sticky cue is a wall-post. A wall-post is a textual statement written by another individual that appears on one's own profile page. Research has supported warranting theory online (Walther et al., 2009; Van Der Heide, Johnson, & Vang, 2013), which predicts that other-generated cues have a greater impression weight than user-generated cues. It follows that a wall-post on Facebook would carry greater value than a personal picture. This occurs because a photo angle can be selectively chosen and posted oneself, whereas it might take more effort to convince friends to compliment one's appearance on a Facebook wall.
Applying warranting theory to a context of first-year collegiate students is again insightful. If a first-year student were judging a new friend, and considering how to act with that new friend based on viewing their Facebook profile, it is likely that the strongest judgments would arise from reactions to what appears on that individual's wall, the clearest other-generated cue. Thus, a second hypothesis can be posited: H2: For incoming college students, wall-posts on a new friend's Facebook profile will produce more extreme responses on likelihood to drink intentions, whether positive or negative, compared to pictures.
---
Positive and Negative Cues
Finally, research has indicated that the online context in which personality information is presented can affect how it is interpreted (D'Angelo & Van Der Heide, 2013). Specifically, D'Angelo and Van Der Heide (2013) identified positivity and negativity effects (driven by the non-normativity effect) which can arise in impression formation. Simply put, cues that are not expected in a given online context will be judged with respect to their valence in that online context.
The positivity and negativity effects allow for certain predictions about personality judgments in an online environment (D'Angelo & Van Der Heide, 2013; Carr & Walther, 2014). For example, it is likely that individuals about to begin their freshman year of college will not have been exposed to many drinking references from same-age friends on Facebook, as alcohol references tend to emerge during the first year of college (Moreno et al., 2014). Additionally, older adolescents today know the public nature of anything they post online (Lewis, Kaufman, & Christakis, 2008), and such postings are taboo among this population, for whom drinking is illegal. Thus, any posting referencing drinking would likely be frowned upon, coming off as negative. While the negativity effect may emerge from drinking references, there is also the possibility of a positivity effect emerging from pro-social messages. Here, pro-social is defined as any type of cue that indicates a desire to better oneself or others. For example, discussion of a study group may occur on an individual's wall, and seeing such uncommon pro-social cues on a new friend's Facebook profile may produce a positive judgment of that individual's character.
Predictions of these positive and negative responses are strengthened when considering processes of adolescent development. Specifically, there is a well-documented concept associated with adolescent development: resiliency (see Luthar, Cicchetti, & Becker, 2000 for review). Resilience has been deemed an ordinary phenomenon whereby adolescents, even when faced with threats, will achieve positive outcomes (Masten, 2001).
Combining what is known about negativity and positivity effects with the notion of resilience, certain predictions can be made. Given that drinking itself is a rather normative behavior on many college campuses (Wechsler, Lee, Kuo, & Lee, 2000) and an expected aspect of college, it is likely that even underage students will participate. However, at the same time, the notion of resilience suggests that these individuals would be more likely to place themselves in a safe situation with a responsible individual to partake in alcohol than with an individual whom they may deem irresponsible, unsafe, or unintelligent. Thus, the following hypothesis can be stated: H3: For incoming college students, pro-social Facebook cues on a new friend's profile will lead to higher likelihood to drink intentions compared to drinking-related Facebook cues.
---
Internalizing Social Networks: Cues as Schema
Understanding how individuals view and form impressions of online profiles is an important task. However, online profiles are no longer just online; they are often discussed in person, over a conversational audio channel. SIPT (Walther, 1992), the theory upon which the notion of sticky cues and the non-normativity effect are built, is a theory about visual processing. However, Facebook, wall-posts, and status updates are common conversational topics for collegiate students and others. ("Did you see what he posted on her wall?!") Thus, the final aspect of this research attends to the movement of cues away from being simply screen-based structures.
Importantly, in verbal conversation these structures of online communication lack the visual aspects that allow a perceiver to clearly judge wall-posts as other-generated and perhaps more credible than pictures or textual self-statements. However, it is possible that certain features of online communication have become so ingrained in communicators that they are automatically interpreted as being more or less important, regardless of visual cues. If this is the case, it is possible that the credibility of certain cues is so familiar to college-aged students that the cues may be considered schema: a personal mental framework that allows individuals to process information in an effective and automatic manner (Anderson, 1990). A schema is generally believed to have some level of activation, which also triggers other related schema, thus acting as a cognitive shortcut, similar to a heuristic. If wall-posts and pictures are schema, the associated heuristic might be a signal of credibility or importance for judging an individual. Hence, this research aims to test the stickiness of cues through an auditory channel, thus entertaining the notion of Facebook cues as schema.
---
Method Participants
This study took place at two large public universities, one Western and one located in the Midwest. Students were randomly selected from the registrar's lists of incoming freshmen students from both universities and deemed eligible if they were between the ages of 18 and 19 years and enrolled as full-time freshmen for fall 2011 at one of these two universities. This sample focused on incoming freshmen for two reasons. First, this research represents the first stages of a longitudinal study involving these students. Second, the measures taken at this time (prior to freshman year) were deemed an important line of investigation given the likely position of Facebook in the friendship formation and uncertainty reduction processes during the first year of college. A total of 338 participants were interviewed, 190 (56%) of whom were female and 148 (44%) of whom were male, with 198 (58%) from the Midwestern university and 139 (41%) from the Western university. Of these participants, 252 (75%) identified as Caucasian, 39 (12%) as Asian, 21 (6%) as more than one ethnicity, 13 (4%) as Hispanic, and 5 (1%) as African American.
Given that the aim of this study involved predicting likelihood to drink, nondrinkers were screened out of analyses. There were a total of 50 non-drinkers removed from this sample, leaving 288 total participants considered in the analyses.
---
Interviews
After providing consent, all participants completed a phone interview. Interviews were conducted by trained research assistants and lasted between 40 and 60 minutes on average. During interviews data were recorded onto a collection spreadsheet. Interviews included a vignette.
---
Vignette
To assess the influence of Facebook cues on intention to use alcohol, a vignette was utilized. Vignettes, systematically elaborated descriptions of concrete situations, are a valid and comprehensive method for exploring people's perceptions, beliefs, and meanings about specific situations (Alexander & Becker, 1978; Barter & Renold, 1999; Dresselhaus, Peabody, Luck, & Bertenthal, 2004; Peabody, Luck, Glassman, Dresselhaus, & Lee, 2000; Peabody, Luck, Glassman, Hansen, Spell, & Lee, 2004; Young, Dilworth, & Mott, 2011). Vignettes allow for the manipulation of important study variables in a manner that would not be feasible in an observational study, as well as the collection of information from a large number of participants simultaneously. Further, vignettes allow for avoidance of observer effects and ethical dilemmas, and the control of confounding effects (Dresselhaus et al., 2004; Gould, 1996; Peabody et al., 2000; Spalding & Phillips, 2007).
The vignette was designed to assess participant views of how displayed Facebook content would impact their intention to drink in a particular setting. To develop alcohol references, past databases of coded Facebook information were reviewed and example references to alcohol use were noted. To develop non-alcohol related posts, active Facebook profiles were reviewed to identify references that were non-alcohol related. For the purposes of consistency within this study, these prompts were intended to have a pro-social tone. These vignettes were then tested on a pilot sample of participants and edited in response to comments.
These vignettes presented a scenario in which the participant (who was to enter college within three months of the interview date) was invited to a party by a senior student that they just met. The student was then asked to imagine that they were currently looking at the senior student's Facebook profile, prior to going out to that party tonight. Then, the participants were presented with a number of different cues that they might see on this individual's profile, and asked to respond with their likelihood to drink with this individual at the party after viewing each cue. Cues varied in type (wall-post, status update, picture) and valence (pro-social vs drinking-related) with dummy prompts of interests and groups also included. See Appendix A for full vignette prompt.
---
Results
Taken together, Hypotheses 1 through 3 predict a distinct pattern of means. Specifically, when assessed by likelihood-to-drink responses, the cue pattern was predicted to move from lowest likelihood to greatest likelihood as follows: drinking-related wall post, drinking-related picture, drinking-related status update, pro-social status update, pro-social picture, pro-social wall post. In order to assess the predicted pattern of differences, a repeated measures ANOVA was conducted and the data were tested for a linear trend, while controlling for gender, ethnicity, and university. As predicted, there was a significant effect of cue type on likelihood to drink, F(5, 1390) = 5.915, p < .01, partial η² = .02. Additionally, there was a significant linear trend, F(1, 278) = 7.79, p < .01, partial η² = .03, indicating that likelihood to drink in response to cues increased proportionately, as predicted by Hypotheses 1 through 3. However, while the hypotheses received statistical support, the effect sizes were rather small. Figure 1 illustrates the pattern of means.
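The linear-trend test described above can be sketched as a polynomial contrast: each participant's six ordered cue ratings are weighted by orthogonal linear coefficients, and the per-participant contrast scores are tested against zero. This is an illustrative sketch only; the data below are simulated (the study's dataset is not public), the means are loosely based on the reported pattern, and the covariate adjustment used in the paper is omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 1-5 ratings for 288 participants across six cue types, ordered
# from drinking-related wall-post to pro-social wall-post (hypothetical means).
true_means = np.array([1.65, 1.90, 2.10, 2.30, 2.45, 2.60])
ratings = np.clip(rng.normal(true_means, 0.8, size=(288, 6)), 1, 5)

# Orthogonal linear contrast weights for six ordered levels.
weights = np.array([-5, -3, -1, 1, 3, 5])

# Per-participant contrast score; a positive mean indicates an
# increasing linear trend in likelihood to drink across cue types.
contrast = ratings @ weights

# One-sample t-test of the contrast scores against zero.
t, p = stats.ttest_1samp(contrast, 0.0)
print(f"linear trend: t({len(contrast) - 1}) = {t:.2f}, p = {p:.4g}")
```

With ratings that rise across the ordered cue types, the mean contrast score is positive and the test rejects a flat pattern, which is the logic behind the significant linear trend reported above.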
As is evident, the pattern of means falls in the manner predicted by the hypotheses. Specifically, Hypothesis 1 (visual primacy) was supported, with pictures receiving more extreme ratings than status updates, and Hypothesis 2 (warranting) was supported, with wall-posts receiving the most extreme ratings. Additionally, pro-social cues led to an overall higher likelihood of drinking, with drinking cues leading to a lower likelihood of drinking, thus supporting Hypothesis 3 (positivity and negativity effects, resilience).
Given these results, specific differences among cue types were tested. First, difference scores were computed to assess the strength of response to each cue type. For example, wall-posts with drinking-related references led to the lowest intention to drink (M = 1.65), whereas wall-posts with pro-social references led to the highest intention (M = 2.60), leaving wall-posts with the greatest computed difference (M = .95). Thus, wall-posts led to the most extreme reactions, compared to the range of reactions to pictures and status updates. Using these difference scores, Bonferroni-corrected paired t-tests were conducted, as suggested by Field (2009), to compare the difference scores across cue types. As predicted, likelihood changes based on pictures were more extreme than those based on status updates, and likelihood changes based on wall-posts were more extreme than those based on pictures, thus supporting H1 and H2, respectively. See Table 1 for means and t-test results.
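The difference-score comparisons above can be sketched with paired t-tests and a manual Bonferroni correction. The difference scores here are simulated for illustration (hypothetical means and spreads loosely mirroring the reported pattern), not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 288

# Simulated pro-social-minus-drinking difference scores per participant
# for each cue type: wall-posts widest, status updates narrowest.
diff_wall = rng.normal(0.95, 0.8, n)
diff_pic = rng.normal(0.60, 0.8, n)
diff_status = rng.normal(0.25, 0.8, n)

comparisons = {
    "wall-post vs picture": (diff_wall, diff_pic),
    "picture vs status update": (diff_pic, diff_status),
}

# Bonferroni: multiply each p-value by the number of comparisons, cap at 1.
m = len(comparisons)
for label, (a, b) in comparisons.items():
    t, p = stats.ttest_rel(a, b)
    p_adj = min(1.0, p * m)
    print(f"{label}: t({n - 1}) = {t:.2f}, Bonferroni p = {p_adj:.4g}")
```

Each paired test asks whether one cue type's difference score is reliably larger than the next, which is the comparison behind H1 and H2.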
---
Discussion
While college students readily consume both Facebook (Alemán & Wartman, 2011) and alcohol (Wechsler et al., 2000), little is known about how one activity may affect the other, that is, how the impression formation process impacts intended behavior. Moreover, the impact of this process may be taking place both through visual channels, when individuals interact with computer screens, and through auditory channels, as discussion of Facebook becomes a staple of face-to-face conversation. Thus, this research presented a vignette to examine how cues presented on Facebook, which were either alcohol-related or pro-social, impacted college students' self-reported intentions to drink alcohol. Findings indicated that some cues do matter more; some cues were stickier than others. Additionally, findings indicated that viewing pro-social cues led to a higher self-reported likelihood to drink with a target individual for first-year college students. Finally, this research found that these predicted patterns emerge when tested over an auditory channel, suggesting that sticky cues are not only an element of visual perception and impression formation, but rather are making their way into the very schema of interpersonal communication.
With such findings, this research provides further empirical evidence of visual primacy (Van Der Heide et al., 2012) and the warranting effect (Walther et al., 2009) in the impression formation process. However, it does so by enhancing the conceptualization of "sticky cues" (Van Der Heide & Schumaker, 2013). According to these findings, wall-posts are the stickiest cues associated with a Facebook page, followed by photographs, followed by textual self-disclosures.
The second contribution of this research comes in the more practical findings concerning the interaction of Facebook and collegiate drinking. We found that pro-social cues were associated with increased reported intention to drink compared to alcohol-related cues. This pattern may be predicted by the negativity and positivity effects (D'Angelo & Van Der Heide, 2013) associated with online impression formation, as drinking-related references are likely seen as negative by this population, which is likely aware of online privacy and legality issues (Ellison, Vitak, Steinfield, Gray, & Lampe, 2011). Additionally, the resiliency framework (Luthar et al., 2000) for adolescent development suggests that older adolescents will not knowingly put themselves into a harmful situation, instead seeking a responsible route if given the time and the right decision-making context; this suggests a pro-social post might be judged as positive in the context of Facebook, and acted on as such.
Finally, this research takes a theoretical leap in considering the importance of cues online. Many processes identified in CMC find their foundation in communication processes established first in face-to-face communication: the non-normativity effect (D'Angelo & Van Der Heide, 2013) was drawn from correspondent inference theory (Jones & Davis, 1965), and visual primacy (Van Der Heide et al., 2012) emerges from work evaluating visual and verbal cues (Argyle, Alkema, & Gilmour, 1972; Mehrabian & Wiener, 1967). Patterns emerged in face-to-face communicative action and were found to exist in computer-mediated communication as well. Now, this study suggests that this may hold true in the opposite direction. The findings from this study support impression formation predictions that do not end when the phone or computer is turned off. Cues that are important online may be internalized and have impact even when no screen is present. A wall-post may have started as an impression-formation cue, but its importance is clear even when discussed via face-to-face communication. Thus, this research both indicates which cues are stickiest in impression formation and supports that sticky cues exist not only on the webpage, but as learned schema within the mind.
---
Limitations and Future Directions
This study has limitations which must be considered. First, the vignette methodology, though informative as a step forward in this research, carries a number of inherent limitations. One possible source of error is the fact that each participant was exposed to a single questionnaire containing all cues. While the cues were not presented in the hypothesized order of means, it is possible that there were order effects. Further, no control condition was assessed within this vignette. While it is clear pro-social cues led to a higher likelihood to drink than drinking-related cues, it is unknown how these cues relate to baseline intentions. Thus, future research should seek to establish these effects in a randomized between-subjects experiment, and also assess how these cues relate to baseline drinking likelihoods. Additionally, this vignette did not specify the gender of the senior college student. It is possible that individuals might have interpreted this senior student as same-sex or opposite-sex, and responded to the prompt differently because of this assumption. Future research should explore the impact of gender in association with college alcohol use and influence on Facebook.
A second limitation comes in the population sample. The sample was drawn specifically from incoming freshmen at two universities; the gender and demographic characteristics of the sample are consistent with those of the larger universities, suggesting the findings may generalize to similar large schools. However, the generalizability of the results cannot extend beyond this population. Thus, future research should examine different types of school settings, and whether these patterns of cue influence extend over time, especially in a manner that addresses the evolving norms of social and networked life that a college student can experience. Additionally, it is questionable whether these patterns exist in populations with different drinking and posting norms.
A third limitation comes in the consideration of the evidence for warranting theory over visual primacy. While this finding has theoretical and empirical support within this paper, a stronger manipulation would involve an actual simulation administered via a screen. This would allow the hypothesized mechanisms behind these theories to occur naturally. Thus, further research is needed before this particular hierarchy can be established with strong evidence.
A fourth limitation arises from the consideration of alcohol consumption likelihoods. Our study did not assess intended or actual drinking frequency or quantity. Future research on Facebook cue influence in this domain should consider both participant drinking behaviors and participant drinking intentions in a more specific manner. While it is clear future work is necessary to further understand the influence of Facebook cues on collegiate alcohol consumption, the findings of this research represent a strong first step to understanding the visceral impact of a virtual environment.
---
Mean difference t-tests
---
Supplementary Material
Refer to Web version on PubMed Central for supplementary material. |
Background. Lesbian, bisexual, or gay individuals (LBGs) have an increased risk for mental health problems compared to heterosexuals, but this association has sparsely been investigated for psychotic disorders. The aim of this study was: (1) to examine whether LBG sexual orientation is more prevalent in individuals with a non-affective psychotic disorder (NAPD) than in people without a psychotic disorder; and if so, (2) to explore possible mediating pathways. Methods. Sexual orientation was assessed in the 6-year follow-up assessment of the Dutch Genetic Risk and Outcome of Psychosis study (GROUP), a case-control study with 1547 participants (582 patients with psychotic disorder, 604 siblings, and 361 controls). Binary logistic regression analyses were used to calculate the risk of patients with a psychotic disorder being LBG, compared to siblings and controls. Perceived discrimination, history of bullying, childhood trauma (CT), and sexual identity disclosure were investigated as potential mediating variables. Results. The proportion of individuals with LBG orientation was 6.8% in patients (n = 40), 4.3% in siblings (n = 26), and 2.5% in controls (n = 10). The age-and gender-adjusted odds ratio of LBG for patients was 1.57 (95% CI 1.08-2.27; p = 0.019), compared to siblings and controls. Discrimination, bullying, and CT all partially mediated this association. Conclusions. Adverse social experiences related to sexual minority status may increase the risk for NAPD. Sexual identity, behavior, and difficulties need more attention in everyday clinical practice. | Introduction
During the late 1950s, when homosexuality was still viewed as a psychiatric disorder, nonclinical population-based studies in the visible lesbian, bisexual, or gay individual (LBG) community repeatedly found no elevation of the natural occurrence of mental disorders in LBGs compared to heterosexual (HTS) people (Cochran & Mays, 2000). Since the early 1990s however, research with improved study designs and less selective inclusion of LBG individuals reported increased rates of mental disorders in LBGs compared to HTSs. A meta-analysis of 25 studies calculated odds ratios (ORs) of 1.5 for depression, anxiety, and substance abuse disorders, and a twofold excess in suicide attempts (King et al., 2008). In a large majority of studies, however, psychosis was not investigated as a mental health outcome. Sexual minority status has been associated with a higher prevalence of psychotic symptoms in general population studies in the UK and the Netherlands (Chakraborty, McManus, Brugha, Bebbington, & King, 2011;Gevonden et al., 2014). To the best of our knowledge, these are the only two studies comparing risk for psychotic disorders and psychotic symptoms between LBGs and HTSs, respectively. The current study aimed to investigate the association between LBG status and the risk for psychotic disorders, and to explore potential pathways.
Social adversity and social stress over the life course may be a substantial mediator of psychological problems and mental illness in LBGs. Social stress occurs when the social self is threatened due to maltreatment, stigmatization, discrimination, or exclusion (Meyer, 2003). Such social-evaluative threats are more likely to occur for those belonging to ethnic (Veling, 2013) and sexual (Kuyper & Fokkema, 2011) minority groups and may increase the risk for psychiatric disorders. Childhood sexual and physical abuse is up to four times more likely to occur in LBGs (Corliss, Cochran, & Mays, 2002). Gay boys are 4.6 times, and lesbian girls 2.4 times, more likely to be bullied during high school compared to HTS adolescents (Goodenow, Watson, Adjei, Homma, & Saewyc, 2016). There is tentative evidence for a dose-response relationship between victimization through bullying and mental health problems (Bontempo & D'Augelli, 2002). Moreover, childhood bullying is specifically thought by some to influence cognitive and biological mechanisms of psychotic ideation in those with at-risk mental states in early adolescence (Lataster et al., 2006).
To our knowledge, the mechanisms underlying associations between sexual minority status and psychotic disorders have not been studied (see Fig. 1). A fair amount, however, has been published on the socially adverse environmental risk factors for non-affective psychotic disorder (NAPD). The association between childhood trauma (CT) and psychosis has been quantified at a substantial OR of 2.8 (van Nierop et al., 2014). Childhood bullying increases the risk for psychotic disorder (Bebbington et al., 2004). Lastly, perceived discrimination too has been associated with an increased risk of psychotic symptoms in clinical minority studies (Pearce, Rafiq, Simpson, & Varese, 2019). Within LBG populations, the degree of perceived discrimination by means of sexual prejudice has been associated with mental health problems (Goodenow et al., 2016).
Factors of social adversity thought to mediate associations between LBG and psychosis are shown in Fig. 1. A previous cross-sectional study (Gevonden et al., 2014) found that perceived discrimination, in particular, mediated the twofold increased psychotic symptom development in a community sample of LBGs.
The current study investigated the prevalence of LBG in a large population-based cohort of patients with psychotic disorders, their siblings, and healthy controls. We aimed: (1) to examine whether the proportion of LBGs is higher in patients with psychotic disorders compared to individuals without psychotic disorder; and if so, (2) to explore possible mediating pathways. We hypothesized: (a) that sexual minority status is more common in patients than in siblings and healthy controls, (b) that patients less often disclose their sexual identity to others, and (c) that CT, experiences of bullying, and perceived discrimination contribute to an increased risk for NAPD.
---
Methods
Data were collected from the Genetic Risk and Outcome in Psychosis Study (GROUP) (Korver, Quee, Boos, Simons, & de Haan, 2012), a large longitudinal observational population-based cohort study, conducted in Dutch mental health institutes affiliated with four academic medical centers in the Netherlands (Amsterdam, Groningen, Maastricht, Utrecht) and in regional psychotic disorder services in Belgium. The procedure of recruitment and population characteristics has been described in detail elsewhere (Korver et al., 2012). The GROUP-study was approved by the Medical Ethics Committee of the Academic Medical Center of Utrecht. All subjects gave written informed consent. The current study uses data from the third GROUP assessment, 6 years after baseline (data assessment 2011-2014).
---
Subjects
Patients were asked to participate in the GROUP study if they met the following inclusion criteria: (i) age range 16-50 years, (ii) diagnosis of (recent) NAPD, and (iii) good command of the Dutch language. Control subjects were selected through a system of random mailings to addresses in corresponding geographical areas. Controls were excluded if they had a first-degree relative with a psychotic disorder, established with the Family Interview for Genetic Studies. Siblings of included patients were also approached to take part in the GROUP study, if they did not have a history of psychotic disorder. If controls or relatives developed a psychosis during the study period, they were allocated to the patient group.
---
Measurements
---
Diagnostic instruments
Detailed medical and psychiatric histories were collected, including the Comprehensive Assessment of Symptoms and History (CASH), a semi-structured interview for assessing diagnosis and psychopathology (Andreasen, Flaum, & Arndt, 1992), or the Schedules for Clinical Assessment in Neuropsychiatry (SCAN 2.1) (Wing et al., 1990). Trained psychologists or psychiatrists with extensive clinical experience made diagnostic classification(s) using the Diagnostic and Statistical Manual of Mental Disorders-IV (DSM-IV) criteria (APA, 2000).
---
Sexual orientation and behavior
Homosexuality has several dimensions, including self-identification, same-sex attraction, and same-sex behavior. In order to best capture sexual orientation, participants were asked if their predominant orientation was same-sex (response options 'yes', 'no', 'I don't know', and 'refuse to answer'). Participants were classified as LBG if they replied 'yes'. Missing data for sexual minority status, i.e. 'I don't know' or 'refuse to answer', were recoded as 'no', assigning these subjects to the HTS group. In a sensitivity analysis, we recoded 'I don't know' and 'refuse to answer' into 'yes' in order to compare both results. All participants were also asked to what extent they had disclosed their sexual orientation to people in their environment. Sexual identity disclosure can be seen as a weakening effect modifier of (minority) stress (Kuyper & Fokkema, 2011). The latter was illustrated by findings of lower cortisol levels and fewer psychiatric symptoms in adult LBGs who had disclosed their sexual identity compared to those who had not (Juster, Smith, Ouellet, Sindi, & Lupien, 2013). Disclosure is also associated with affiliation and the formation of social circuits, which are likely to reduce the impact of social stress (Meyer, 2003).
---
Socio-demographic variables
Socio-demographic variables included age, gender, ethnicity (% of Caucasian participants), living with a partner, education (% highest degree obtained), urban living (see Table 1), and lifetime cannabis use (% of participants that ever used cannabis during their lifetime 'yes/no').
---
Social adversity and social stress
CT was assessed with the Dutch version of the Childhood Trauma Questionnaire-Short Form (CTQ-SF). The Dutch CTQ-SF effectively screens for maltreatment between clinical and non-clinical samples (Thombs, Bernstein, Lobbestael, & Arntz, 2009). The CTQ-SF is a 25-item retrospective self-report questionnaire designed to assess five dimensions of childhood maltreatment: (1) Physical Abuse, (2) Emotional Abuse, (3) Sexual Abuse, (4) Physical Neglect, and (5) Emotional Neglect. The total mean score of all child trauma experiences was used for analysis.
Bullying was assessed as follows: participants were asked if they had ever been bullied by another child or teenager during elementary, middle, or high school and asked to rate the severity of bullying on a five-point scale (from never = 1 to often = 5). Lifetime discrimination experiences were assessed with a series of dichotomous 'yes' or 'no' questions on the following situations: ever been fired, not hired for a job, not been promoted, detained, questioned or threatened by police, badly treated by the justice system, discouraged from further education, prevented from buying/letting a house, badly treated by neighbors, denied a loan/mortgage, received bad service, or been badly treated in either medical care or public transport. The mean cumulative score was used as a measure of perceived discrimination. In contrast to CT, which was assessed at wave 2, bullying, discrimination, and sexual minority status were all assessed at wave 3.
---
Statistical analysis
Statistical analysis was performed using SPSS 17.0. Pearson χ² tests of independence, independent-samples t tests, and one-way ANOVA were used to test socio-demographic and clinical differences between patients and controls, and between LBG and HTS groups.
Binary logistic regression analyses were used to estimate the risk (expressed as an OR) of being LBG for patients with a psychotic disorder compared to people without a psychotic disorder. The a priori determined confounding variables age and gender were adjusted for.
To investigate whether CT, bullying, and perceived discrimination mediated the association between LBG and NAPD, a bootstrapped multiple mediation analysis was conducted with the Process macro developed by Hayes (2012). Release 6.0 of the GROUP database was used for analyses.
---
Results
Controls were significantly more likely to be living with or married to a partner than patients (OR 3.2, 95% CI 2.7-3.8). The proportion of people with high education was lower in patients than in controls. LBG patients had significantly higher mean scores for CT than HTS patients. In the LBG group, 39.5% (n = 15) of patients reported being bullied often v. 20% (n = 2) of controls; among HTS participants, this was 20.3% and 8.1%, respectively. Discrimination scores were also significantly higher in LBG participants: 29% of LBG patients and 50% of LBG controls reported never having experienced discrimination, v. 39% and 62% of their HTS counterparts.
Data from all 1546 subjects were used to calculate binary regression estimates (see Table 2). Compared to controls, the OR of LBG status was 1.61 for patients with NAPD (95% CI 1.13-2.29, p = 0.008) and 1.58 for siblings (95% CI 0.75-3.32, p = 0.225). Of the LBG participants, 78% of controls, 38% of siblings, and 29% of patients had disclosed their sexual orientation to almost everyone in their lives, while not a single control, 4% of siblings, and 16% of patients reported that no one knew of their sexual orientation.
Multiple mediation analysis showed (see Table 3) that perceived discrimination, CT, and bullying all partially mediated the association between LBG status and NAPD. The indirect effect of discrimination, controlling for the other mediators, was the greatest in predicting psychosis in LBGs, B = 0.23 (bootstrapped 95% CI 0.07-0.44). The second greatest indirect effect was that of CT, B = 0.12 (bootstrapped 95% CI 0.04-0.25) and the last was bullying, B = 0.06 (95% CI 0.002-0.15).
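The product-of-coefficients logic behind these bootstrapped indirect effects can be illustrated with a minimal sketch. The simulated data below are purely hypothetical (not the GROUP dataset), and the study's actual analysis used Hayes' Process macro in SPSS; the sketch only shows how an indirect effect a×b and its percentile bootstrap CI are obtained for a binary outcome:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data (not the GROUP dataset):
# x = sexual minority status, m = a mediator (e.g. perceived discrimination),
# y = case status (NAPD yes/no)
n = 500
x = rng.binomial(1, 0.1, n).astype(float)
m = 0.8 * x + rng.normal(0, 1, n)                    # a-path: x -> m
p_y = 1 / (1 + np.exp(-(-1.0 + 0.5 * x + 0.6 * m)))  # b-path plus direct effect
y = rng.binomial(1, p_y).astype(float)

def indirect_effect(x, m, y):
    """Product-of-coefficients (a*b) estimate of the indirect effect."""
    a = np.polyfit(x, m, 1)[0]  # OLS slope of mediator on exposure
    # b: coefficient of m in a logistic model of y on x and m,
    # fitted with a few Newton (IRLS) steps to keep the sketch dependency-free
    X = np.column_stack([np.ones_like(x), x, m])
    beta = np.zeros(3)
    for _ in range(25):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        beta += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - p))
    return a * beta[2]

point = indirect_effect(x, m, y)

# Percentile bootstrap CI, as reported by bootstrapped mediation analyses
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect B = {point:.2f}, bootstrapped 95% CI [{lo:.2f}, {hi:.2f}]")
```

In a multiple-mediator model like the one above, Process estimates each mediator's indirect effect while controlling for the others; the same bootstrap loop applies, refitting all paths on every resample.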
In sensitivity analyses with participants who responded 'I do not know' or 'refuse to answer' added to the LBG category, 66 patients (11.3%), 36 siblings (6.0%), and 23 controls (6.4%) were classified as LBG. The unadjusted OR for LBG status was 1.37 (95% CI 1.07-1.76; p = 0.012) for patients compared to controls. The age- and gender-adjusted OR was 1.42 (95% CI 1.09-1.85; p = 0.009).
---
Discussion
---
Main findings
This large population-based case-control study found that the prevalence of a sexual minority status was higher in patients with NAPD (6.8%) than in siblings (4.3%) and in healthy controls (2.8%). Whereas approximately 80% of controls had disclosed their sexual identity to almost everyone in their lives, only 30% of patients had done so. Our study results provide preliminary evidence that sexual minority status is a risk factor for NAPD, with a significant positive association (OR 1.6). Mean scores of social adversity, with the exception of CT, were significantly higher in LBGs than in HTSs, also within the patient group. CT, a history of bullying, and perceived discrimination partially mediated the association between sexual minority status and NAPD.
Comparison to previous studies and interpretation
Chakraborty et al. (2011) found elevated rates of psychotic disorders in non-HTS individuals (unadjusted OR 3.75, 95% CI 1.76-8.00). Similarly, in another general population study in the Netherlands, Gevonden et al. (2014) found elevated rates of psychotic symptoms in the LBG population compared with HTS during two consecutive periods: NEMESIS-1 (OR 2.56, 95% CI 1.71-3.84) and NEMESIS-2 (OR 2.30, 95% CI 1.42-3.71). In the current clinical sample, we found an OR of LBG status of 1.6 for patients with NAPD (95% CI 1.13-2.29, p = 0.008). Correspondingly, perceived discrimination outcomes were higher for LBGs in both of the aforementioned studies and were thought to act as a social stressor (or threat) toward the genesis of psychopathology. Our mediation results confirm these reports by finding similar factors mediating the association between LBG status and NAPD specifically. Our results are also consistent with previous health mediation risk findings (Bontempo & D'Augelli, 2002) from data on 9188 9th-12th grade students from Massachusetts and Vermont, of whom 315 were LBGs; the combined effect of sexual minority status and (high) victimization was consistently associated with higher levels of risk indices such as substance use or suicide attempts. In our data, bullying experiences were more prevalent amongst LBG than HTS subjects, and the indirect effect
of bullying on NAPD risk was significant. Compared to CT and discrimination, however, the effect of bullying was smaller. The reason for this may be that in the aforementioned study, bullying was ascertained in real time, while our study participants (mean age at current assessment 32) were older, so recall error could have led to an underreporting of bullying. Furthermore, 16% of our sexual minority patients had not disclosed their sexual identity, which may also have contributed to lower bullying scores. Sexual minority status is likely to represent environmental factors that increase the risk for psychotic symptoms and disorders. Current environmental theories of psychosis emphasize a central role for aversive experiences over the life course. Childhood adversities, in particular recurrent experiences of hostility and threat, have been consistently associated with increased risk for psychotic disorder (Morgan & Gayer-Anderson, 2016). Similarly, higher rates of psychosis in immigrants and their offspring are likely to be explained by a negative social minority position, being part of a group that is viewed as inferior by the majority population (Veling, 2013), and chronic stress due to social exclusion, discrimination, and social defeat (Selten, van der Ven, Rutten, & Cantor-Graae, 2013). Such experiences are common in LGB individuals, even if they have not disclosed their sexual identity, through identification with the minority group (Meyer, 2003). Indeed, aversive social experiences partially mediated the effect of LGB status on the risk for psychotic disorder in our sample. Several authors (Howes & Murray, 2014) hypothesize that exposure to social stressors during critical periods of brain development leads to sensitization, resulting in a permanent excess of basal presynaptic dopamine transmission, which is thought to increase the risk for psychosis.
Pathogenic effects of social stressors on neurochemical systems are similar for both NAPD and LBGs (Mizrahi, 2016).
Sexual identity disclosure has been shown to improve the overall mental health of LBG youth (Meyer, 2003), and adult LBGs who have disclosed their sexual preference show lower cortisol levels and fewer psychiatric symptoms compared to LBGs who have not (Juster et al., 2013). It is plausible that these neurodevelopmental and biological mechanisms, if present, are more pronounced in LBGs, considering the trying conditions under which LBGs come of age and live thereafter. LBGs are known to achieve important milestones, such as a steadfast identity, settling down with a partner, and family planning, later in life (Kertzner, 2001). In spite of the Netherlands' renowned international 'gay-friendly' reputation, our results show that LBGs experience increased psychological strain over their life course by means of social adversity. The formation of biased cognitive schemas is more likely to occur after negative social experiences and is exacerbated and perpetuated by having an 'outsider status' (Veling, 2013). On the other hand, self-disclosure at a young age, which appears to be a trend (Russell & Fish, 2016), may lead to increased social adversity and exclusion in individuals not yet psychologically equipped to handle the adversity. This in turn might explain why the prevalence of psychiatric disorder in young LBGs has not declined over recent decades, despite positive changes in social attitudes in Western countries (Brechwald, 2011). In addition to the above-mentioned socio-neurodevelopmental theories, other potential mediators of the association between LBG status and psychotic disorders should also be considered, such as healthy identity and body-image formation. A recent Dutch survey study of LBGs (n = 2352) showed that in men, higher levels of gender nonconformity predicted experiences of CT by an adult family member, which in turn predicted higher levels of adult revictimization.
If LBGs are more victimized as children by primary caregivers (Bos, de Haas, & Kuyper, 2019), they are also more likely to be deprived of the developmental conditions needed to form a steadfast sense of self and a healthy body image. Difficulties in establishing a steadfast sense of self are reported by patients with psychosis (Nelson, Thompson, & Yung, 2013).
---
Strengths and limitations
The results of this study should be interpreted in the light of several methodological issues. Selection bias may have occurred. While the large patient group of the GROUP study can be argued to be representative of the NAPD population (Korver et al., 2012), at the third assessment (6 years after baseline), 48% of the original patient sample (n = 1120) was lost to follow-up. The results would be biased if HTS patients were more likely to drop out than LBG patients, or if healthy LBGs were less likely to participate in the study than HTS controls. We tackled possible responder bias by allocating the 'refuse to answer' and 'I don't know' responders (a total of 13 participants) to the HTS group. A recent population survey found that approximately 4% of Dutch men and 3% of Dutch women are homosexual (Keuzekamp, Kooiman, & Lisdonk, 2012), which corresponds well with the LBG rate in our control group. We conceptualized predominant same-sex attraction as a measure of sexual minority identity (i.e. self-identification as LGB), yet we recognize that we did not also ask about same-sex behavior, and predominant attraction does not per se necessitate same-sex behavior or self-identification as an LBG individual. However, dissonance between sexual identity (for which same-sex attraction is a key question to pose; Sell, 1997) and same-sex behavior occurs particularly in (young) adolescents (Kann et al., 2016), whereas the mean age of our minority patients was 34.9 years.
A further potential concern is the measurement error of sexual minority status. It is conceivable that sexual identity is a part of delusional ideas in some patients with NAPD. Sexual orientation was measured 6 years after baseline, making incorrect classification as LBG as a result of actual psychosis less likely.
Furthermore, mediators must precede the occurrence of the outcome in time. This is true for CT and bullying, but not necessarily for perceived discrimination, as this was measured over the lifetime and could therefore have occurred after the onset of psychosis. Another limitation of the current study is the small LBG sample size. We did not have enough statistical power to control for urban living and cannabis use. LBGs tend to live in densely populated urban areas (Kuyper & Fokkema, 2011). A higher occurrence of substance abuse amongst LBGs is a well-replicated finding (Bos et al., 2019) and is hypothesized by some to be more 'normalized' within LBG culture and/or used as a coping mechanism for minority stress (Meyer, 2003). It should be acknowledged that these variables had many missing values in the third-wave data, which limits their interpretation. Our data suggest that cannabis use was lower in LBGs than HTSs, which implies it is probably not a substantial factor in explaining the increased risk for psychosis in this population. As we did not have detailed information on cannabis use, and data were not available for a third of participants, conclusions should be regarded with caution. The results of this study imply that LBGs face even greater mental health risks than previously known. Social defeat factors such as CT, discrimination, and bullying especially need to be addressed.
As China continues to urbanize rapidly, an increasing number of adults from rural areas choose to leave their families and migrate to the city in the hope of a better-paid job. Through a literature review, this paper explores the psychological development issues faced by children left behind (LBC for short in this paper) in rural areas and how to address them. The paper finds that awareness of the psychological issues besetting rural LBC should be raised among families, schools, and society: families should pay attention to children's emotional education, schools should assert the importance of psychological education, and the government should rectify undesirable practices and amend laws connected to the urban household registration system.
Starting from the 1970s, China's economy skyrocketed due to the reform and opening-up policy, and the process of urbanization accelerated accordingly. At the same time, an increasing number of workers from rural areas chose to migrate to the city in the hope of higher salaries. However, due to high living expenses and policies that do not allow countryside dwellers to register as permanent urban residents, most workers cannot move to the city with their family members. Against this background, a great number of children were left behind in the countryside, living with their grandparents or other relatives and lacking care from their parents. Without an emotional bond and communication with their parents, the psychological development of these left-behind children became a major concern for society. In this article, multiple research projects on the psychological development of left-behind children in China are studied, summarized, and compared. Considering left-behind children's growing-up environment, the education they received, and the thoughts they may have, this article finds that left-behind children face severe psychological problems in life and study. This study aims to summarize recent studies of left-behind children and give suggestions about the problems they face.
---
Literature Review
To study left-behind children's psychological development, the first thing that needs to be examined is what kind of thoughts they may have and what problems they may meet without the role of parents in their lives. In Wang's case study, the researcher used survey questionnaires, face-to-face interviews, and monitored group projects to learn about left-behind children's emotional situations. After 6 months spent with rural left-behind children, the researcher found that the three main concerns were learning anxiety, a tendency toward loneliness, and relationship anxiety. In other words, children's interpersonal skills and their ability to absorb knowledge did not develop well in the absence of parents [1]. In another case study, in Jiangxi Province, left-behind children were found to have psychological problems such as anxiety, lack of self-control, low study interest and motivation, extreme sensitivity, and stubbornness. The possible causes proposed for these situations were inconvenient transportation, lack of cooperation and guidance, and lack of a sense of belonging [2]. A similar result was also reached in research from Central China Normal University, which compared several subgroups with different frequencies and lengths of parental contact and showed that left-behind children are at a disadvantage in emotional adjustment [3]. Also, in a study of 861 left-behind children, linear regression analyses found age and gender to be negative factors in their psychological situations; in other words, male high school students showed more problem behaviors, such as lower school engagement and worse peer relationships.
---
Major Psychological Problem Faced by LBC in Rural Areas
---
Learning Disability
The psychological problems of left-behind children in rural China are reflected in their studies. Poor academic performance, bad relationships with teachers and peers, absence from classes, and bullying are the main manifestations. These problems will seriously influence their academic performance and later social life after they enter society. Without excellent academic scores to prove their cognitive abilities, they will not be able to find well-paid jobs. Also, they are not aware of how to get along well with their co-workers and bosses, and dealing with conflict with others will be a serious problem for them.
---
Low Self-Esteem
The lack of a parental role in LBC's lives also affects their confidence and their understanding of themselves and society. They have feelings of being betrayed and abandoned, which leads to sensitivity and a tendency toward loneliness. They may fear that others will hurt their feelings with a single look. As a result, they hardly trust other people and try everything to avoid social contact, which seriously affects their normal life and can give rise to many mental health problems. Left-behind middle school students showed more loneliness, self-blame, allergies, and physical symptoms. There are significant differences in the mental health status of left-behind children of different genders, and the mental health level of girls is lower than that of boys. Li Qi conducted a questionnaire survey on the mental health problems of left-behind children in 5 rural primary schools and 3 township middle schools in Nanfeng County, and the results showed that these problems mainly concern learning anxiety, interpersonal communication, and emotions [2]. Parental contact is also of great importance: "Among the left-behind characteristics investigated in this study, the frequency of parental contact had the broadest impact on LBC's adaption. Parental contact was beneficial to LBC's mental health. Children who had the most frequent contact with their parents suffered less from loneliness and depression and reported the highest life satisfaction and self-esteem" [3].
---
Stubbornness
Living with their grandparents, most left-behind children are spoiled or lack correct guidance in life. Also, the rapidly expanding internet, access to smartphones, and the rising popularity of TikTok-like short videos mean that knowledge and information reach children without any prior check or selection. This can lead to distorted values and ideology and will negatively affect their future job opportunities and chances of promotion. Also, "Parents tend to put their children in two different perspectives, one is too dependent on the traditional education model, not enough rational and flexible guidance; the other is coddling and letting go" [2]. These two polarized attitudes also cause stubbornness in some students, because they are already used to being overlooked or coddled.
---
Solutions
Knowing these problems, various researchers have suggested necessary strategies to the government, parents, and social organizations. More parental contact, less spoiling by grandparents, and more investment in education are proposed by almost every study. In Wang's case study, after the intervention of teachers and classmates, the psychological situation of the child improved markedly [1]. That is to say, a supportive school environment can work as a kind of substitute, offering some emotional support [1]. Schools are also encouraged to offer psychological education for both children and their guardians, to raise awareness of and place emphasis on emotional problems [4].
In conclusion, the main problems left-behind children in China may meet are anxiety, poor relationships, learning disabilities, and low self-esteem. The government, schools, and parents are called on to put more emphasis on, and invest more in, their psychological situations [5]. However, the conflicts in a highly developed society between high-paid jobs and limited education opportunities, rising living expenses in cities, the lack of high-quality education in the countryside, the inequality between provinces and districts, and the relatively low coverage of children's psychological education and care mean that this transformation still has a long way to go [6][7][8].
---
Conclusion
With the deepening of research on left-behind children, all sectors of society have devoted more attention to them and adopted various assistance measures to help left-behind children in rural areas solve the educational, learning, psychological, behavioral, and other problems they face. Among these topics, this paper mainly discusses the causes of the mental health problems of left-behind children.
The main research focus of this paper is left-behind children's mental health problems and countermeasures. This thesis mainly reviews studies of the psychological problems faced by left-behind children, such as learning disabilities, low self-esteem, and stubbornness. In addition, solutions to these problems are also proposed. Governments should take responsibility for lowering the costs of left-behind children's studies and lives and make sure that the environment around schools is suitable for their academic lives. Schools should attach great importance to students' mental health and build more psychological counseling rooms where students with mental illness can have their problems solved or relieved, especially since difficulties with behavioral adjustment negatively impact both physical and mental health. The families of left-behind children [4] can use modern technology such as FaceTime to keep track of their children's mental state and, if something is wrong, comfort and support them in time. What's more, there are some shortcomings in this thesis: for example, no real-life research, such as interviewing left-behind children in rural areas or conducting a questionnaire survey, was carried out to obtain first-hand results. This drawback will be addressed by studying the field research of other scholars; after learning the correct process of real-life research, a questionnaire on this topic will be designed, and the left-behind children will be visited and befriended in order to know them more deeply. This future research will mainly focus on the relationships among members of rural families in which the parents are absent and leave their children in the village.
To further help left-behind children, the author sincerely suggests that studies of left-behind children should not stop at suggestions, but should spur real action by governments, teachers, families, and students.
Background: Men who have sex with men (MSM) and transgender women in Sub-Saharan Africa are subjected to high levels of sexual behavior-related stigma, which may affect mental health and sexual risk behaviors. MSM and transgender women who are open about, or have disclosed, their sexual behaviors appear to be most affected by stigma. Characterizing the mechanism of action of stigma in potentiating HIV risks among these key populations is important to support the development of interventions. Methods: In this study, a total of 532 individuals were recruited across Eswatini (Swaziland) through chain-referral sampling from October to December 2014, including 419 cisgender MSM and 109 transgender women. Participants were surveyed about demographics, stigma, outness of same-sex practices to family members and healthcare workers, and mental and sexual health. This study used latent class analysis (LCA) to determine latent constructs of stigma/outness, and used multinomial logistic regression to determine associations with underlying constructs and sexual risk behaviors. Results: Three latent classes emerged: 1) Those who reported low probabilities of stigma (55%; 276/502); 2) Those who reported high probabilities of stigma including physical violence and fear/avoidance of healthcare, and were not "out" (11%; 54/502); and 3) Those who reported high probabilities of stigma including verbal harassment and stigma from family and friends, and were "out" (34%; 172/502). Relative to the "low stigma" class, participants from an urban area (adjusted odds ratio [AOR] = 2.78, 95% Confidence Interval [CI] = 1.53-5.07) and who engaged in condomless anal sex (AOR = 1.85, 95% CI = 1.17-2.91) were more likely to belong to the "high stigma, 'out'" class. In contrast, those who had a concurrent male or female partner were more likely to belong to the "high stigma, not 'out'" class (AOR = 2.73, 95% CI = 1.05-7.07).
Depression was associated with membership in both high-stigma classes (AOR = 3.14, 95% CI = 1.50-6.55 "not out", AOR = 2.42, 95% CI = 1.51-3.87 "out"). Conclusions: Sexual behavior stigma at a community level is associated with individual-level risk behaviors among MSM and transgender women, and these associations vary by level of outness about sexual practices. Achieving sufficient coverage of evidence-based stigma interventions may be key to realizing the potential impact of HIV prevention and treatment interventions for MSM and transgender women in Eswatini. | Background
The Kingdom of Eswatini, formerly Swaziland, has one of the world's most widespread HIV epidemics, with more than 27% of adults aged 15-49 living with HIV in 2014 [1]. Encouragingly, in Eswatini and other countries with a generalized HIV epidemic, there has been a decrease in HIV incidence in recent years due to a coordinated response and increase in HIV prevention program coverage including antiretroviral therapy and prevention of mother-to-child transmission [2,3]. However, the HIV prevalence among key populations including gay men and other men who have sex with men (MSM), as well as transgender women, is significant. In particular, HIV incidence among young MSM is increasing in almost every part of the world [4][5][6]. Subsequently, increasing effort is being dedicated to researching and addressing the HIV epidemic among these key populations even in the context of more broadly generalized epidemics [7,8].
For cisgender MSM (cis-MSM) and transgender women, the potential effectiveness of HIV prevention and treatment programing may be limited by structural-and community-level factors, such as stigmas pertaining to sexual behaviors and gender identity, which contribute to suboptimal health-seeking behaviors [9,10]. For example, culturally-insensitive health workers may result in cis-MSM and transgender women avoiding HIV prevention services, or cis-MSM and transgender women living with HIV may avoid HIV treatment services altogether. Reduced utilization of health and HIV services by cis-MSM and transgender women, due to enacted or perceived discrimination, may limit knowledge of the risks of condomless anal intercourse and opportunities for access to novel and emerging prevention services such as pre-exposure prophylaxis as it becomes increasingly available [11,12]. Sexual behavior stigma may also increase risk for depression and other adverse mental health outcomes [13,14]. In turn, adverse mental health outcomes may further increase risk for HIV by decreasing self-efficacy and increasing sexual risk behaviors including condomless anal sex with HIV status-unknown partners [15][16][17], and by affecting the desire or ability of cis-MSM and transgender women to engage in healthcare [18]. Sexual behavior stigma among these key populations may also limit stable couple formations resulting in larger sexual networks, in which people are less likely to know the HIV status of their sexual partners and may ultimately result in increased risk of HIV infection [19,20].
Experienced sexual behavior stigma is often greater for cis-MSM and transgender women who have disclosed and are open about their identity or practices, even if these individuals are also more likely to be financially self-sufficient, comfortable about their sexuality, and have reduced minority stress after disclosure [20][21][22][23]. Potentially, this is because they are more easily identified as targets for discrimination or harassment by broader community members [22,24]. However, non-disclosure of sexual behaviors can lead to poorer mental health, reduced engagement in HIV prevention services, and increased sexual risk-taking behaviors [25][26][27]. Thus, there is a paradox whereby coming out is associated with greater experiences of stigma even if it can result in improved mental health and HIV-related outcomes and greater awareness and acceptance of gay and transgender communities.
Among MSM in Eswatini, an estimated three-fifths identify as gay or homosexual, two-fifths as bisexual, and a small proportion as heterosexual [28]. A study of transgender women and cis-MSM across 8 African countries showed that Eswatini had a higher proportion of transgender participants than Malawi, Lesotho, Togo, and The Gambia [29]. There is a need to better understand the role of stigma in driving the persistent HIV epidemic among cis-MSM and transgender women in Eswatini, especially considering the context of an estimated HIV prevalence of 13% among cis-MSM and transgender women [30], where same-sex relations are a common law offence [31], and where stigma poses a potentially significant barrier to prevention programs and services.
The objectives of this study are: 1) to conduct a latent class analysis (LCA) to determine the latent constructs of stigma and disclosure status among cis-MSM and transgender women in Eswatini, and 2) to determine associations between the underlying stigma constructs and sexual risk behaviors potentially putting these individuals at increased risk for HIV infection. We chose an LCA approach in order to explore how clusters of stigma and disclosure status were related to risk behaviors. LCA is a person-centered methodological approach that identifies unobservable groups through patterns of responses across individuals. This approach aims to identify homogeneous groups that would be challenging to determine by assessing indicators individually [32]. Stigma attributable to sexual behavior is driven by social processes and may manifest through multidirectional, mutually reinforcing mechanisms [33]. Therefore, using a person-centered latent approach to assess sexual stigma, outness, depression, sexual risk behaviors, and sociodemographics helps to better understand these complex patterns. By capturing the multiplicity of the stigma/outness items, the objective was to better understand how these items can be conceptualized and captured in relation to sexual risk behavior among these individuals.
---
Methods
---
Study population and design
A total of 532 individuals were recruited across 5 cities/towns and surrounding regions (Lavumisa, Manzini/Matsapha, Mbabane/Ezulwini, Nhlangano, and Piggs Peak) in Eswatini through peer-referral sampling from October–December 2014. In order to be eligible for the study, participants had to report being assigned male sex at birth, being aged 18 years or older, having had insertive and/or receptive anal sex with a man within the past 12 months, speaking siSwati or English, and being capable of providing written informed consent. This study was approved by the Johns Hopkins Bloomberg School of Public Health Institutional Review Board and the Eswatini Scientific and Ethics Committee.
---
Data collection and key measures
During the study visit, trained interviewers administered a structured questionnaire through a face-to-face interview in a private location. The questionnaire included questions about demographics, stigma, disclosure about having sex with men, and mental and sexual health.
---
Demographics
A two-step gender assessment was used to distinguish between cis-MSM and transgender women in this study. This assessment included reported sex at birth, and reported current gender identity [34,35]. Individuals who reported a gender identity as female or intersex were considered transgender women in these analyses. Participants who reported a gender identity of male are defined as cis-MSM. For these analyses, we included information on age, highest level of completed education, gender identity, employment status (employed or not employed), and whether the study site was located in an urban or peri-urban area. In order to perform the LCA, each of these variables was dichotomized into binary indicators.
---
Sexual behavior stigma
Stigma attributable to having sex with men was measured by asking a series of "yes" or "no" questions, which have been used in several previous studies of cis-MSM and transgender women in Sub-Saharan Africa [10,36]. This sexual behavior stigma was comprised of stigma from personal, social, and healthcare settings. Personal-life stigma included feeling excluded at family gatherings, feeling that family members made discriminatory remarks or gossiped, or feeling rejected by friends. Social stigma included feeling that the police refused to protect you, feeling scared to walk around in public places, being verbally harassed, blackmailed, physically hurt, or tortured, as well as experience of violence. Finally, healthcare stigma included feeling that you were not treated well in a healthcare center, hearing healthcare providers gossip, feeling afraid to go to healthcare services, or avoiding healthcare services.
---
"Out" about having sex with men
Participants were asked, "Have you told any member of your family that you have sex with men or that you are attracted to other men?" as well as, "Does anyone in your family know that you have sex with other men or that you are attracted to other men, other than those who you have told?" Participants who reported "yes" to either were considered being "out" to family members. Participants who responded "yes" to the question, "Was there a time when any health care provider learned that you have sex with other men or that you are attracted to other men (for example, you told them, or they found out because someone else told them)?" were considered being "out" to health care workers.
---
Depression
A positive depression screen was defined as a Patient Health Questionnaire (PHQ-9) score of 10 or greater [37]. The PHQ-9 measures the frequency of depression symptoms within the past two weeks. This scale has been used previously in Sub-Saharan African populations [38,39] and had good internal consistency in our study sample (Cronbach's alpha = 0.89).
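The scoring rule above (sum of the nine symptom-frequency items, positive screen at 10 or greater) can be sketched as follows. The cutoff is taken from the study's definition; the function name is illustrative, not from the source:

```python
def phq9_positive_screen(item_scores, cutoff=10):
    """Return True if the summed PHQ-9 score meets the cutoff.

    item_scores: nine integers in 0..3, each rating the frequency of one
    depression symptom over the past two weeks. A total score of 10 or
    greater is treated as a positive depression screen, as in the study.
    """
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 requires nine item scores, each in 0..3")
    return sum(item_scores) >= cutoff

# Example: a respondent whose item scores sum to 11 screens positive
print(phq9_positive_screen([2, 2, 1, 1, 2, 1, 0, 1, 1]))  # True
```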
---
Sexual risk practices
Participants were asked how often condoms were used within the past 12 months for receptive and insertive anal sex. These measures were dichotomized into a single indicator for condomless anal sex ("any" or "none"). In addition, participants were asked whether at any time in the last 12 months they had multiple regular sexual partnerships at the same time; that is, being involved in two or more ongoing sexual partnerships with either male or female partners. These measures were dichotomized into a single indicator for concurrent sexual partnerships ("any" or "none").
---
Statistical analyses
We tabulated descriptive characteristics of participants using frequencies and percentages. Bivariate logistic regression was used to test associations between being "out" about having sex with men and sexual behavior stigma. These analyses were conducted using SAS software Version 9.4 (Cary, NC, USA).
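For a single binary exposure and binary outcome, the odds ratio from a bivariate logistic regression equals the cross-product ratio of the 2×2 table, and a Wald 95% CI can be computed on the log-odds scale. A minimal sketch of that calculation (not the authors' SAS code; the example counts are invented):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.

    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome.
    Assumes all cells are nonzero.
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```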
In a two-step process, we first used LCA to identify classes based on self-reported measures of stigma and on whether it was known to family or healthcare workers that the participant had sex with men. Models with two through six latent classes were fit iteratively. The number of classes was selected based on theoretically and practically meaningful patterns as well as model fit criteria (i.e., goodness-of-fit indices). Fit indices included the likelihood ratio test statistic (G²), the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the consistent AIC (CAIC), and entropy (Table 1) [40]. Smaller values of AIC and BIC and higher values of entropy indicate better fit.
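The information criteria used for class selection are simple penalized functions of the model log-likelihood, the number of free parameters, and the sample size; smaller values indicate better fit. A sketch using the standard likelihood-based definitions (SAS PROC LCA reports analogous G²-based versions; the log-likelihoods and parameter counts below are invented for illustration):

```python
import math

def fit_indices(log_lik, n_params, n_obs):
    """AIC, BIC, and consistent AIC (CAIC) for a fitted latent class model."""
    aic = -2 * log_lik + 2 * n_params
    bic = -2 * log_lik + n_params * math.log(n_obs)
    caic = -2 * log_lik + n_params * (math.log(n_obs) + 1)
    return aic, bic, caic

# Hypothetical 2- vs 3-class solutions fitted to 502 respondents with 15
# binary indicators: a k-class model has (k-1) + 15k free parameters.
for k, (ll, p) in {2: (-4200.0, 31), 3: (-4100.0, 47)}.items():
    aic, bic, caic = fit_indices(ll, p, 502)
    print(f"{k} classes: AIC={aic:.1f} BIC={bic:.1f} CAIC={caic:.1f}")
```

The BIC and CAIC penalize extra parameters more heavily than the AIC, which is why they tend to favor more parsimonious class solutions.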
Next, multinomial logistic regression was used to identify demographic characteristics, sexual risk behaviors, and mental health characteristics (i.e., depression) that were associated with class membership. These variables were first analyzed individually and then simultaneously in a multivariable model. All covariates except for age and reporting more than a high school education were found to be significant predictors of membership in at least one latent class in the bivariate analyses (not shown). Demographic variables considered to have theoretical importance were kept in the final model regardless of their level of statistical significance. As a result, no variables were dropped from the final model. For both the LCA and logistic regression, participants with missing data were excluded (N = 30). Less than 1% of data were missing for all variables in the LCA and fewer than 4% were missing for variables in the logistic regression. The two-step process analyses were performed using SAS PROC LCA [41,42].
---
Results
---
Participant characteristics
Prevalence of participant characteristics is presented in Table 2. A total of 532 individuals participated in this study, including 419 (79.4%) cis-MSM and 109 (20.6%) transgender women. Participants ranged in age from 18 to 50 years, with a median age of 24 years and an interquartile range of 22-28 years. Less than one-quarter (n = 113, 21.2%) had completed secondary school or less, whereas 51.1% (n = 272) had completed high school and 27.6% (n = 147) completed more than a high school education. The majority of participants were sampled from an urban study site (n = 400, 75.2%) and a little more than one-half were employed or students (n = 301, 56.6%). Experiences of stigma ranged in prevalence from 10.9-43.7% depending on the type of stigma. Almost 44% (n = 233) were out to family members whereas 20.5% (n = 108) were out to healthcare providers.
---
Associations between sexual behavior stigma and being "out"
Being out to a family member was associated with feeling excluded by family members (Odds Ratio [OR] = 2.01, 95% Confidence Interval [CI] = 1.35, 3.00), feeling gossiped about by family members (OR = 4.07, 95% CI = 2.77, 5.98), feeling rejected by friends (OR = 4.44, 95% CI = 2.83, 6.97), feeling like police refused to protect (OR = 1.78, 95% CI = 1.09, 2.89), feeling scared to walk around in public places (OR = 1.61, 95% CI = 1.13, 2.29), being verbally harassed (OR = 4.21, 95% CI = 2.92, 6.06), and being blackmailed (OR = 2.51, 95% CI = 1.65, 3.83). It was not significantly associated with being physically hurt (OR = 1.24, 95% CI = 0.81, 1.91), being tortured (OR = 0.93, 95% CI = 0.59, 1.45), being treated poorly in a healthcare setting (OR = 0.71, 95% CI = 0.40, 1.25), being gossiped about by a healthcare worker (OR = 1.22, 95% CI = 0.74, 2.00), being afraid to seek healthcare services (OR = 0.87, 95% CI = 0.61, 1.24), or avoiding seeking healthcare services (OR = 0.97, 95% CI = 0.68, 1.39) (Table 3).
Being out to a healthcare worker was associated with being treated poorly in a healthcare setting (OR = 2.49, 95% CI = 1.39, 4.46), being gossiped about by a healthcare worker (OR = 2.16, 95% CI = 1.25, 3.71), avoiding seeking healthcare services (OR = 1.81, 95% CI = 1.18, 2.79), feeling excluded by family members (OR = 1.64, 95% CI = 1.03, 2.60), feeling like family members gossiped (OR = 2.50, 95% CI = 1.62, 3.87), feeling rejected by friends (OR = 3.91, 95% CI = 2.47, 6.19), being verbally harassed (OR = 3.63, 95% CI = 2.31, 5.71), and being blackmailed (OR = 2.66, 95% CI = 1.67, 4.22). It was not significantly associated with feeling like police refused to protect (OR = 1.68, 95% CI = 0.97, 2.91), feeling scared to walk around in public places (OR = 1.47, 95% CI = 0.96, 2.26), being physically hurt (OR = 1.49, 95% CI = 0.90, 2.45), being tortured (OR = 1.30, 95% CI = 0.77, 2.19), or being afraid to seek healthcare services (OR = 1.37, 95% CI = 0.89, 2.11).
---
Latent class analysis
Identification of latent classes
AIC, BIC, and CAIC values began to level off at 3 latent classes and had largely leveled off at 4 classes. Based purely on model fit indices, a 4-class model might have been selected. However, after comparing conditional probability distributions between the 3-class and 4-class models, a 3-class model was selected based on the existence of meaningful risk profiles for participants [40,[42][43][44]. In brief, in the 4-class model the high stigma "not out" class appeared to divide into two groups: both had high levels of family gossip and verbal harassment, whereas one group had higher levels of perceived healthcare stigma. We considered these to be sub-groups of the high stigma "not out" class and retained the 3-class model for ease of interpretation. The first class (55%; 276/502) consisted of cis-MSM and transgender women who demonstrated overall low probabilities of stigma as a result of having sex with men ("low stigma" class) (Table 4). The conditional probability of being out to family members and healthcare workers was 38% and 15%, respectively, which suggests that some of the participants in this class were out to family members and healthcare workers, although this was not a defining feature of the class. Individuals in the second class (11%; 54/502) exhibited high probabilities (> 0.50) of physical violence, torture, and fear/avoidance of seeking healthcare, and were less likely to have their sexual identities known by family members or healthcare workers ("high stigma, not 'out'" class). Finally, the third class (34%; 172/502) demonstrated high probabilities of being excluded by or gossiped about by family members, verbal harassment, feeling scared to walk around in public, and fear/avoidance of healthcare workers, and its members were more likely to have their sexual identities known by family members or healthcare workers ("high stigma, 'out'" class).
---
Relationships with class membership
In the final adjusted multinomial model, depression was associated with both high stigma classes relative to the low stigma class (P < 0.01) (Table 5). Reporting concurrent sex partners (P < 0.01) was associated with membership in the high stigma not out class, whereas condomless anal sex was associated with membership in the high stigma out class (P < 0.01). Being employed and identifying with a female/other gender were each associated with reduced likelihood of membership in the high stigma not out class relative to the low stigma class (P < 0.05 for both). Completing high school and completing more than a high school education were both associated with membership in the high stigma not out class relative to the low stigma class (P < 0.01 and P < 0.05, respectively). Being sampled from an urban study site was associated with membership in the high stigma out class (P < 0.01). Age was not associated with class membership (P = 0.86).
---
Discussion
Sexual behavior stigma affects cis-MSM and transgender women across Sub-Saharan Africa [13,[45][46][47], and is likely exacerbated by the illegality of same-sex practices, with punishments including fines or imprisonment [48]. Stigma and discrimination towards cis-MSM and transgender women have previously been associated with poor HIV-related health outcomes, including reduced rates of HIV testing, increased risk for HIV infection, lower likelihood of discussing or disclosing HIV/AIDS status with male partners, reduced engagement in HIV treatment for those living with HIV, and increased condomless anal sex [49][50][51][52]. In these analyses, we found that outness about sexual behaviors grouped together with increased burden of multiple forms of stigma, and that these latent stigma/outness classes were associated with different types of sexual risk behaviors.
In Eswatini, there is persistent societal discrimination against the LGBT community, backed by colonial-era legislation that prohibits anal sex between men [53]. As a result, LGBT individuals risk the loss of family members, friends, and employment if they disclose or are out about their sexual behaviors or gender identity. This structural-level stigma is manifested at the individual level in our study. For example, participants who reported that family members knew about their sexual behaviors had greatly increased odds of reporting feeling excluded and gossiped about by family members. Similarly, having healthcare workers who knew about one's sexual behaviors was associated with increased odds of reporting poor treatment from healthcare workers, being gossiped about by healthcare workers, and avoiding seeking healthcare services. This is additionally problematic because disclosure of sexual practices to healthcare workers is necessary for obtaining accurate sexual histories and meaningful assessments of HIV risk, yet in reality disclosure can be very challenging. In the context of HIV prevention and treatment strategies in Eswatini, if cis-MSM and transgender women face stigma for disclosing their sexual practices, they may be less likely to disclose and subsequently less likely to be identified as appropriate candidates for novel biomedical HIV prevention services, including pre-exposure prophylaxis.
In the latent class regression, those with concurrent male or female sexual partners were more likely to belong to the high stigma not out class. This finding is consistent with results from recent qualitative work examining intersecting stigmas among MSM in Eswatini, where participants reported that the secretive nature of MSM relationships led to greater numbers of sexual partners and more casual types of partners in some cases [19]. Participants indicated that because their MSM relationships are kept secret, families do not play a role in relationship counseling and peacekeeping in the same way that they might for heterosexual couples. It is also common for MSM in Eswatini and other regions to have girlfriends or wives, potentially to fulfill cultural expectations, further challenging the formation of stable male couples [19,20]. In other settings, MSM who also have sex with women showed a higher risk of experiencing intimate partner violence, including physical violence and being threatened with disclosure of sexual orientation, than MSM with only male partners [54]. This may provide insight on the high probability of experienced violence among the high stigma, not out class in this study.
Prevention science theoreticians and practitioners have called for combination HIV prevention strategies, which integrate a package of biomedical, behavioral, and structural interventions to address multiple layers of HIV risk [55][56][57][58][59]. These combination approaches are likely even more effective in reducing HIV incidence among high risk MSM and transgender women [60][61][62]. Given the increased instances of condomless anal sex among those in the high stigma out group in this study, structural interventions to address stigma, such as sensitivity training for healthcare workers and political advocacy, will also be needed to reduce HIV risk behaviors. In Eswatini, the implementation and optimization of combination approaches are currently challenged by punitive policies and stigma affecting MSM [55,57].
Those who identified with a non-male gender (including female or intersex) were least likely to belong to the high stigma, not out class. They were more likely to belong to the high stigma, out class, although this association was not statistically significant. Previous work indicates that transgender women, or individuals assigned male sex at birth who identify as women, are more likely to experience high levels of stigma in comparison even to MSM [29,63,64]. Thus, our findings may reflect the notion that transgender women are more visible in the community than MSM who follow more traditional gender norms, and thus may be more easily targeted for stigma, discrimination, and other forms of abuse. The association between urban residence and membership in the high stigma, out class was not surprising and likely reflects patterns seen in the US and other high-income settings, where gay men and other MSM move to larger cities for social networking opportunities and a more tolerant social climate [65,66].
Screening positive for depression on the PHQ-9 was associated with membership in each of the high stigma classes, compared to the low stigma class. This is consistent with previous data suggesting that depression is higher among MSM as compared with heterosexual men in many parts of the world potentially as a result of stigma and minority stress [13,[67][68][69][70]. MSM interviewed for a qualitative study in Eswatini indicated that living with a stigmatized identity led to feelings of depression and self-stigma [19]. Our findings here further highlight the strong and consistent impact that stigma appears to have on mental health, regardless of whether one is open about their sexual behavior. Unfortunately, there is virtually no literature describing effective depression interventions for MSM in Sub-Saharan Africa [71][72][73].
The low stigma latent class showed moderately high levels of disclosure to family and healthcare providers, although disclosure was not a defining characteristic of the class. A context of overall low stigma may provide a supportive environment for disclosure of sexual behaviors. Even so, the low stigma class still showed moderate levels of fear of being in public spaces and of verbal harassment, and a higher conditional probability for these stigma measures than the high stigma, not out class.
Potential limitations of our study include the use of cross-sectional data, which impedes the inference of causal relationships, and the non-random selection of study participants, which violates an assumption of LCA. However, "hidden" populations such as cis-MSM and transgender women are difficult to sample through traditional methods given the lack of a sampling frame, including census-level data, in Eswatini, and peer-driven sampling approaches are more appropriate. Social desirability bias may have affected participant responses; for example, by causing underreporting of condomless anal sex and stigmatizing experiences. Although LCA leaves open the possibility that one or a few particular stigma items may be driving the associations with risk behaviors, we opted to use LCA to explore how clusters of stigma/outness were related to risk behaviors. The stigma metrics used in this study were self-reported measures defined as attributable to sexual behavior; however, for individuals experiencing layered or intersecting stigma, the attributable characteristic of stigma may be difficult to identify. An additional limitation is that the sample was underpowered to conduct a separate analysis for transgender women without cis-MSM.
---
Conclusion
Even in the context of increasingly available biomedical HIV intervention strategies including oral pre-exposure prophylaxis, the reduction of HIV-related risk practices remains crucial for the prevention of HIV acquisition and transmission. In these analyses, stigma appears to consistently be associated with increased HIV-related risk practices and risks for depression. Consequently, evidence-based stigma interventions that are able to operate under challenging legal and human rights settings may be key to combating the persistent HIV epidemic for cis-MSM and transgender women in Eswatini.
---
Availability of data and materials
The data that support the findings of this study are available from the Eswatini Ministry of Health but restrictions apply to the availability of these data, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of the Swaziland Ministry of Health.
---
Abbreviations
AIC: Akaike information criterion; AOR: Adjusted Odds Ratio; BIC: Bayesian information criterion; CAIC: Consistent Akaike information criterion; CI: Confidence Interval; HIV: human immunodeficiency virus; LCA: latent class analysis; MSM: men who have sex with men; PHQ: Patient Health Questionnaire
---
Authors' contributions
SS led the development of analysis and writing of the manuscript. CL led the finalization of analyses and manuscript writing, and led the revision and submission process. CH served as study coordinator, contributed to questionnaire development, and provided input on the analyses and manuscript development. SK created data entry and data management systems for the study, contributed to the questionnaire development, and provided input for the analysis. LVL, DK, MM, BS, LM, and SM supported conceptualization of the study including data collection methods, implementation of the study, and interpretation of study results. SB conceptualized the study designs, monitored study stages, and gave guidance on the analysis and was involved throughout manuscript development. All authors contributed substantially to either the study design, data collection, analysis or interpretation of data; participated in drafting the article or revising it for intellectual content; and approved the final version to be published, as outlined by the ICMJE authorship criteria.
---
Ethics approval and consent to participate
This study was approved by the Johns Hopkins Bloomberg School of Public Health Institutional Review Board (FWA 00000287): and the Eswatini Scientific and Ethics Committee (FWA 00015267). All participants in this study provided written informed consent.
---
Consent for publication
Not applicable.
---
Competing interests
The authors declare that they have no competing interests.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
The immortal HeLa cells case is an intriguing example of bio-objectification processes with great scientific, social, and symbolic impacts. These cells generate questions about representation, significance, and value of the exceptional, variety, individuality, and property. Of frightening (a lethal cancer) and marginalized (a Black, poor woman) origins, with their ability to "contaminate" cultures and "spread" into spaces, becoming of extraordinary value for advances in human knowledge, well-being, and the economy, HeLa cells have represented humanity and emphasized the importance of the individual as a core concept of personalized medicine. Starting from the process leading from HeLa "cells" to HeLa "bio-objects," we focus on their importance as high-quality bio-specimens. We discuss the tension between the phenomenological character of fundamental biological research and the variety of materials and methodologies in epidemiology and personalized medicine. The emerging methodologies and societal changes reflect present EU policies and lead toward a new paradigm of science.
Current biotechnology is characterized by its capacity to generate biological processes and analyze large amounts of information, supported by the development of advanced equipment. Biotechnology has opened up unique potentials for producing new objects by manipulating and transgressing boundaries between domains that were formerly understood as incompatible, or by creating completely new materials. These new entities are named bio-objects (1), and they are defined as biological innovations produced through processes continuously negotiated at the intersection of science, politics, and society (2). In this definition, bio-objects are approached as temporary categories that are produced in an ongoing bio-objectification process, aiming at controlling life in specific time and space.
When mentioning "bio-objects" as novel biological entities, a special place has to be assigned to the HeLa cells for their capability to challenge conventional natural, cultural, scientific, and institutional classifications (bio-objects) and to generate controversy due to their potential challenging of established order and practices (bio-objectification). Thus, under the lenses of bio-object and bio-objectification concepts, various remarkable features may be attributed to HeLa cells and to the controversial bio-ethical arguments their establishment and use still generate today, six decades later.
---
The HeLa cells
HeLa cells are an immortalized line established in 1951 from a rare cervix adenocarcinoma of a young woman. They were named after her: Henrietta Lacks. These cells became, and still are, one of the most important, if not the most important, laboratory models of modern cell biology research since their first establishment. They have been applied to study crucial biological processes of healthy and pathological systems, the functions of genes, and the development of pioneering "omics" approaches, as also proved by the over 60 000 publications produced, according to the MEDLINE database (3). They were also necessary for relevant research that was awarded two Nobel prizes: one for discovering the link between human papilloma virus and cervical cancer (Harald zur Hausen, 2008) and another for the role of telomerase in preventing chromosome degradation (Elizabeth Blackburn, Carol Greider, and Jack Szostak, 2009). Recently, a first detailed genomic and transcriptomic characterization of a HeLa cell line relative to the human reference genome (4) and a haplotype-resolved genome and epigenome of the aneuploid HeLa cancer cell line (5) have been published. Among the various applications based on HeLa cells (for instance, developing treatments for syphilis, AIDS, and cancer), it is worth remembering one of the earliest and most important ones, i.e., the development of the vaccine against the polio virus (6). HeLa cell lines are commercially available and also circulate freely within the scientific community.
The biological characteristics of the HeLa cells proved to be clinically extraordinary, as described by Howard W. Jones, who conducted the gynecological examinations and found that the tumor was soft, difficult to identify by the bare fingers, and purple in color: "…general examination was completely negative. Inspection of the cervix, however, revealed a lesion […] smooth, glistening, and very purple […]. Its appearance was different from any of the other 1.000 or so carcinomas of the cervix I had previously seen" (7). It also did not respond to radiotherapy. What was remarkable about this tumor, even compared to other cancer cells, was its capability for rapid propagation and unusual invasiveness, and its durability not only inside Henrietta's body but also outside, in the laboratory.
Nowadays, we know that the proliferative capability of these cells is related to an active version of telomerase which, during cell divisions, prevents the incremental shortening of the chromosome telomeres (8) that is implicated in aging and eventual cell death. Because of their persistence, contamination of other cell lines by HeLa cells is frequent; thus they have also been referred to as a "laboratory weed" (3). Aneuploidy (chromosome number 82, with four copies of chromosome 12 and three copies of chromosomes 6, 8, and 17) is documented in HeLa cells, as a result of horizontal gene transfer from human papillomavirus 18 (HPV18) to human cervical cells (9). In culture conditions, HeLa cells divide unlimitedly (the "immortal" feature) and may mutate: hence, from the same tumor cells removed from Henrietta, many strains of HeLa cells have been generated. Estimates of their total number spread across laboratories and repositories all over the world yield astonishing quantities which far exceed the total number of cells that were in Henrietta's body (10). Worth stressing, HeLa cells have even been proposed as the contemporary establishment of a new species (Helacyton gartleri) because of their ability to replicate indefinitely, their own clonal karyotype, their chromosomal incompatibility with humans, their ecological niche, and their ability to persist and expand (11).
The HeLa cells have also been presented as a paradigmatic example of fraud and prevarication in bio-ethics, since neither Henrietta Lacks nor her family were informed about the use of HeLa cells, and anonymity was compromised through the naming of the cell line (12). The HeLa cells case became popular recently with the publication, in 2010, of the book "The Immortal Life of Henrietta Lacks" by journalist Rebecca Skloot, which won several awards and became a best-seller. In recent months, moreover, the sequencing of the HeLa genome (4,5) became the subject of a new important case concerning consent and privacy since, even though the genomes of the cells used in these studies are not identical to Lacks' original genome, their sequence may still reveal heritable aspects which would violate the descendants' privacy. Because of this, and because of the extraordinary scientific and bioethical significance of HeLa cells, in August 2013 an agreement was reached between the US National Institutes of Health (NIH) and Henrietta's family members. Accordingly, the sequence data were placed "in a controlled-access database," i.e., the NIH's database of genotypes and phenotypes (dbGaP; http://www.ncbi.nlm.nih.gov/gap), "which would require researchers to apply to the NIH to use the data in a specific study and to agree to terms of use defined by a panel including members of the Lacks family." This agreement is expected to spur new "discussions regarding consent for future use of bio-specimens, with a goal of fostering true partnerships between researchers and research participants" (13).
From HeLa "Cells" to the HeLa "Bio-Object"
The concepts of bio-object and bio-objectification have made it possible to describe and discuss the HeLa cells through a consistent set of features capturing the process of how they came into being as a biological phenomenon, a research object, and a commercial product, and how this shift is part of a more complex interaction between biology, science, technology, and society.
HeLa cells possess features of a bio-object which seem to be of particular relevance, first of all the potential to cross barriers. In Henrietta's body, in fact, a virus induced DNA modifications resulting in the cells' immortalization. Thus, hybridity (14) may be suggested as an outcome of the interaction between different domains (virus/mammalian), while the 'transformation' of Henrietta Lacks' cells by the virus produced a new entity, a boundary crawler (15) between the human and the non-human.
Some important characteristics of these cells also concern the bio-social implications we attribute to bio-objects. Principally, this tumor raises property issues. It was part of Henrietta Lacks' body; it belonged to her because it was inside her and existed due to her, thus being her property. At the same time, however, it may be identified as something separate from her, in the form of a parasitic, invasive, and transgressive biological material to be dissected. Outside the body, its survival became technology-dependent (on culture media, conditions, and repositories). Separate from her, this "medical waste" became a precious material to be shared, sold, and disputed. It acquired the identity of a tool for study, but also a tool for generating other bio-objects in a circular process where new knowledge is the starting point of new bio-objectification, leading to the production of further bio-objects.
HeLa cells can also be described as part of a hermeneutical process where their meaning is produced through a continuous and dynamic process of interpretation. They represent a multitude of meanings depending on context and interpretative approach, hence they require new policies and communication practices (16).
The importance of HeLa cells arises from the combination of several crucial factors: their ease of culture (fast growth, immortality, and desirable stability); their availability through large-scale industrial production and distribution; their fame in laboratories all over the world; and, finally, their low cost.
---
Phenomenology, Epidemiology, and Personalized Medicine
The story of Henrietta Lacks' life stands in sharp contrast to the story of the life of her cancer cells. As a poor black woman, she represents the margin of society, the "other." Her cancer cells, on the other hand, have characteristics that make them especially valuable for research. They are valuable because they have made it possible to study essential aspects of what it means to be human. Seen in relation to the emphasis on quantitative medical research (eg, biobank research) in today's society, Henrietta Lacks' cells underline the value of qualitative research in medicine.
Qualitative method, phenomenology, and hermeneutics are well known from caring sciences such as nursing science. With direct links or roots in Heidegger's philosophy, this phenomenological method was developed to uncover the human concerns and practices that are central to being and dwelling in the world. The focus is on experiences. As a method, it helps to identify contextually bound clusters of themes, and it makes interpretation a key issue for scientific analysis, as it is essential to understand phenomena as part of processes and contexts (17). The interpretative process is dynamic and strives to go from the part to the whole, in a manner where critical reflection on the process is emphasized, aiming at achieving insights of general value.
In contrast to HeLa as a single cell line, epidemiological research on common complex diseases (CCD) in the field of molecular biology is nowadays based on large databases, not least bio-bank data. This large-scale approach is high on the agendas of academic, public, and popular debates and of regulatory work at Community and national levels in Europe. In part, this reflects how new technology has changed epidemiological research in the last 10-15 years, and how ethical and legal regulations collide with medical and commercial visions in this field. Cancer investigations conducted on HeLa cells represent an interesting contrast with epidemiological research on CCD. The latter, in fact, focuses on groups and is closely linked to the discourse of personalized medicine. Here, large collections of samples are converted into data, combining huge demographic databases, health record databases, and survey collections, ie, combining quantitative and qualitative materials.
In personalized medicine, representation and categorization are key themes, as the identification of a group and its representative are crucial. In public discourses on epidemiology and personalized medicine, on the other hand, under-representation is a hot topic, as medical research is considered not fully reliable for the various groups, having been conducted mostly on a restricted population (white and young men) while neglecting the other groups. Moreover, this fact would reinforce the main social and political categories of the 20th century, namely gender, race, ethnicity, and class (18,19). Against this background, the modeling based on HeLa cells is politically and socially interesting and epistemologically and ontologically challenging. HeLa cells, generated from a rare cancer of a poor black woman, have been used to create models valuable not only for specific but also for general biology. If new knowledge can be extracted from research on Henrietta Lacks' tissue, does that imply that unicity is an ontological premise for cancer research? Is there a hierarchical relation between unicity and variety?
The method used in cancer research based on Henrietta Lacks' cells represents a continuous interpretative process, going from the detail to the whole and back again, in a circular critical rethinking of contexts and processes in line with the phenomenological approach of the caring sciences. Epidemiological research aiming at personalizing medicine and treatments requires knowledge of basic biological functions, as well as the possibility to test and experiment. The HeLa cells make both possible, serving as an experimental model and as a tool for assays, as in a hermeneutical (interpretative) circuit. Thus, from the HeLa cells, an exceptional phenomenon, knowledge has been gained about the general, which in turn has been applied to defining personalized medicine and treatments, which again generate new knowledge.
---
Common Complex Diseases (CCD), Epidemiology, and HeLa Cells
The importance of cancer in global society, and the focus on epidemiological research, shed new light on the value of the HeLa cells since, according to official World Health Organization (WHO) data, cancer is an increasing cause of death globally (20). Besides, epidemiological research on CCD (cancer, diabetes, Alzheimer's, Parkinson's, and cardiovascular diseases) is a main tool for developing diagnostics, identifying causes, and individualizing treatments.
Research based on HeLa cells is not only still important but has even been revalued (3).
Epidemiological research on CCD is a quite noteworthy area for society, for individuals' health and well-being, and for the economy. Medical biotechnology has generated new opportunities but also loud ethical and legal debates and a new landscape of directives, regulations, and laws (18,21). Concerns about the draft EU Data Protection Regulation have been driven by geneticists and ethicists involved in bio-bank research on CCD, the first being critical of possible research limitations and the second being divided into pro and con factions over the use of huge databases of personal data (22,23). In this framework, bio-bank research appears as a main road to research breakthroughs, while personalized medicine is regarded as the new arena for diagnostics and treatments. However, as the latter focuses on group-specific characteristics, doubts have been raised about its effectiveness in representing populations, since the chosen categories could instead be reproductions of established social categories, thus lacking the required medical relevance (19).
While the basic research developed on the HeLa cells may not be controversial in itself and thus does not create public controversy, when these cells also touch upon bio-bank issues, some quite problematic questions may arise, among them variety and diversity. HeLa cells, in fact, would represent the value of the one, while the epistemological assumptions of epidemiological research on bio-bank collections would represent the value of the many. Accordingly, doubts can be raised as to whether fundamental biology research based on HeLa cells can represent and produce additional understanding of human constitution, health, and well-being when set against epidemiological research.
---
Toward a New Paradigm of Science
In the field of medical biotechnology, as in science overall, research and society are intimately related (24), and changes in one are reflected in the other. Policy is a relevant and often controversial outcome of this interaction, as the bio-objectification process points out (25). In the case of bio-bank research, an important context is the new draft EU data protection regulation (26), as well as the establishment of the new European bio-bank network (Biobanking and Biomolecular Resource Research Infrastructure, BBMRI) (27), which concern the use of new qualitative and quantitative materials and methods and create new challenges for researchers, clinicians, and regulators. In this context, the draft EU data protection regulation is the object of loud public debates and disputes.
Our socio-economic setting is organized into private and public spheres, into communal and individual ownership. Industry and ethical guidelines have so far focused on informed consent, anonymity, benefit sharing, property, and ownership. In the European bio-economy, research and commercialization have been split between public and private sectors, where information about individuals has been restricted to public institutions. Now, the need for co-operation between medical research and pharmacy, as well as the use of personal data, creates new challenges regarding privacy, autonomy, and governance. Thus, a new challenge is emerging, both from society and research, concerning the most suitable models for organizing and governing the advancement of biotechnology research while at the same time protecting citizens.
The draft EU data protection regulation aims at building up a joint platform for the EU with common rules as well as common interpretations of established rules (the latter point having been a constraint up to now) and centralized control functions (26). This document pushes forward the relevance of informed consent as its key ethical concept. At the same time, it aims at facilitating unity and cooperation within the EU, where materials are meant to be shared, overcoming the present national obstacles concerning laws and regulation. In addition, BBMRI (27) has already grown into a 54-member consortium with more than 225 associated organizations (largely bio-banks) from over 30 countries, making it one of the largest research infrastructure projects in Europe. As already pointed out (28), "The BBMRI has proposed the concept of expert centers, in which pharmaceutical research would be conducted outside the industry setting, donor material would not move outside bio-bank infrastructure, and industry would not have exclusive rights to data generation." With this platform and the draft EU data protection directive, the EU aims at strengthening the protection of the individual (informed consent) and at bringing together formerly separated public and private institutions and sectors, ie, it is initiating a change regarding governance and ownership of knowledge and property. This approach is expected to realize the new vision of the role of biotechnology and bio-industry in the EU and in individual nation states (29).
We believe that these changes, ie, a new perspective on knowledge production and ownership, can also be regarded as the starting point of a shift toward a new paradigm for science in which life sciences, social sciences, and humanities will jointly contribute to human well-being.
In conclusion, in this article we have shed light on the relation between science, society, and policy from the angle of the bio-objectification process of the HeLa cells. The high quality of these cells is the result of an odd combination of the unusual features that make them a model cell for biology and medicine with the legal, technological, and socio-economic contexts in which they have been produced, distributed, and used. As a result, they generate questions about the representation, significance, and value of the single and exceptional, and about the power to modify and change that they can generate. Thus, in the terminology of hermeneutics, they appear as a phenomenon in process, where new meanings and bio-realities are created. These cells disrupted and undermined the being of Henrietta Lacks, and the life of HeLa cells stands opposed to the being of Henrietta Lacks. So it is the HeLa cells, and not Henrietta Lacks, that have come to represent life. Seen from the angle of present discourses on bio-bank research and data protection regulation, the bio-objectification process of the HeLa cells seems to be part of a further societal and political discourse on variety, individuality, and property. Belonging to one human being, HeLa cells have represented humanity and emphasized the importance of the individual as a core concept of personalized medicine.
Our "Beacons of Organizational Sociology" series makes available, through first-time translations, texts that have shaped debates in organizational sociology in non-English-speaking countries, or presents reflections on such debates by established scholars. The first text in this series is a shortened English translation of the German article "Organisation als reflexive Strukturation" by Günther Ortmann, Jörg Sydow, and Arnold Windeler, published in 1997 in the highly influential book "Theorien der Organisation. Die Rückkehr der Gesellschaft" [Theories of Organization. The Return of Society]. The article applies Giddens' social theory to organizational research. In elaborating on "the principle of reflexive organization," the text provides a social-theoretically informed concept of organization that is of continuing relevance for organization research today. The publication can be classified as one of the decisive writings by which the authors contributed to establishing structuration-theoretical organizational research in the German-speaking world, informing many studies, eg, on new organizational forms, innovation, and inter-organizational relations. The original manuscript's concise overview of the organization research using a structuration perspective that existed at the time is not part of this translation.
When we say 'organisation,' we are operating with a fundamental ambiguity. It can mean either the process of organising or the result of that process: the 'organisedness' of social action and subsequently a system of organised action. Trying to eliminate this ambiguity by regulating the language we use would not only be futile because it is far too deeply embedded in language-it would also be unwise. A better suggestion would be to grant language a presumption of wisdom and ask why it has so stubbornly preserved this ambiguity-and not only in German. Following this path, one soon hits upon that recursiveness of human action which lies in the fact that our action results in the production of precisely those structures that subsequently enable and restrict our further action. Giddens (1976, 1979, 1984) wanted to preserve this double meaning of producing and product explicitly under the heading of 'structuration', and his conception of the duality of structure dissolves the commonplace dualism of action and structure, their mere opposition, in the circular figure of recursiveness. Structures are the medium and the outcome of action. Even in organisations, they are initially only a 'concomitant' result-in the sense that they are an unintended and unreflected side effect of action. Often enough, we create structures without wanting to and without attending to them. But when the flash of reflection illuminates structuration as producing and product, when we pause and begin to ask questions (What is actually repeating? There does seem to be a pattern: What sort of pattern is it? How can we fix it? Isn't there another way?) and to practise structuration in a reflexive way, then structuration becomes-in nuce-organisation. Organisation is structuration that has lost its naivety, its primordiality, its innocence: reflexive structuration.
This reflexive structuration finds its most pointed expression in the formality of modern organisation, in formal constitutions and procedures, which are of great importance in the coordination of action. The organisers, not least, hope that this will collectively secure and increase individual reflexivity and rationality. Whether this actually happens, and who benefits and who is burdened, is a subordinate question. Markets can also be the object of such organisational efforts-resulting, for example, in strategic business networks. Finally, even those more extensive modes of interorganisational coordination of action that Hollingsworth calls governances are partly the result of reflexive structuration (cf. Braczyk 1997, who himself speaks of discursive coordination). However, the feature of formal constitution-such as rules of formal membership-is much more weakly developed in inter-organisational networks than in organisations.
The concept of organisation is ambiguous also because today it concerns a state of affairs that existed in the most diverse historical epochs and societies: When Ptahhotep, vizier of King Isesi, recorded the "best practices" of pyramid building on papyrus scrolls around 2700 BCE (Kieser 1993b, 63), this was the result of what we would today call reflection on the structuration of pyramid building, which resulted in the formulation of rules and regular practices. But organisation per se, sans phrase, is a modern concept that has only emerged in the course of what has been called 'modernisation' in sociology-the detachment of social practice from religion and tradition by means of rationalisation.1 Only with this development does talk of 'organisations as institutions'-usually referring to social systems called organisations-gain its historical substrate; and somehow these organisations have a conspicuous part in the genesis of capitalism (and vice versa, provocatively Türk 1995). The bearers of this reflection, however, are no longer just single people-subjects, personal actors, individuals-but, recursively enough, organised systems of science, administration and economy, to name but a few, or corporative actors in whose structures-rules and resources-the cumulative reflexive knowledge of modernity is stored; perhaps one could even say 'inscribed'. In this latter case, we can also follow Ritsert (1981) in speaking of system reflexivity. While subject reflexivity refers to an individual's self-reference in thinking and acting, without which reflexive structuration is unthinkable, system reflexivity in our context refers to a supra-individual, namely, organisational reflexivity: a movement back into itself beyond individuals and individual thinking and acting, in the course of which organisational knowledge is produced and inserted into new recursive loops of organisational action.2

1 Foundations of Structuration Theory: Organisation, Reflexivity and Recursiveness
In this view, organisations, conceived as systems of organised action or practices, are reproduced as a result of the-more or less purposive-actions of competent actors or "knowledgeable agents" (Giddens 1984). Such agents refer in their interactions to structures, to sets of rules and resources and to other structural features of their field of action, properties that are appended to the field of action by this structured action-rigid departmental boundaries, for example, or a rigid division of labour, a high failure rate in conventional mass production, discrimination against employees, asymmetrical income distribution, to name but a few. In drawing on these, actors thus reproduce these structures and structural properties-and entire social systems such as firms and inter-firm networks. Indeed, they often do this intentionally, though they are neither completely aware of nor able to fully control the consequences of their actions.
Organisations are characterised by organisational practices, by recurring forms of action practiced in organisations, and not merely by formal structures, structural properties or input-output relations or by communication or decision-making alone. Organisational structures only exist at all in the actions and practices of actors-and subsequently in their memories and expectations, in the form of a virtual order. In our view, organisations are those social systems within which action is directed and coordinated by means of reflection, specifically by means of reflection on their structuration. The formulation and establishment of rules and the provision of resources takes place in a reflected way, which is to say that when it comes to organisations, structuration is the result-although an only partially intended one-of reflection striving for expediency.
For Giddens (1984, 5), actors always act reflexively. This is to say that in their actions they refer more or less deliberately to their own past, present and anticipated future behaviour as well as to that of others and to the structures of the field of action. Yet we speak of organising only when this reflexivity pertains to the shaping of these structures; and we speak of organisations in the-modern-sense of organised social systems only when formality-formal constitution and regulation-is present as differentia specifica.
Following Giddens' actor model (1984, 5), even the level of individual action involves the interplay of three layers of action that must always be considered: individual3 and organisational forms of "reflexive monitoring" ("Does my action connect well with that of others? Did it work out that way? I have to do it differently next time! What now? What will the others do? Watch out, there's someone coming from the right! Do I have to greet the person?"); the "rationalization" of action (when actors develop a commonsensical theoretical understanding of the reasons for their actions: ‛I did that (this way) because … '); the conscious or unconscious "motivation" of action by a desire for wish fulfilment or fear avoidance. (However, for Giddens a major part of human action is not directly motivated. Much of what we do stems from "routine," "habit").
It is already evident here how familiar discourses in organisational research can dovetail: those on steering and control, those on organisational ideologies and those on motivation (cf. on such discourses, e.g. Staehle 1994). For Giddens, however, actors never fully control the processes of social reproduction. Much is closed off to them, such that in many respects they act as competent actors on the basis of merely 'practical', implicit knowledge. They know how it is done, or perhaps better: they know how to do it, they are good at it, without being able to explain exactly how and why they (have to) do it. Furthermore, they act on unrecognised premises and thereby produce unintended consequences. The results especially of collective-for example, organisational-action often turn out differently than intended.
In doing what they do, competent actors refer recursively to structures, perpetuating them through this very action-even if they are not always left unchanged in the process. This is exactly what recursiveness means: the iterative application of an operation/transformation to its own result-in this case, the operation 'structuring' is applied to the result 'structure'. In other words, recursiveness means that the output of an operation/transformation is reapplied as a new input to this same operation/transformation, which is precisely what happens to the structure reproduced in and by action: It is the (concomitant) result of action and enters into further action as its 'medium'.4 Structures therefore enable reflexively acting actors to act competently in interaction situations even as they constrain actors' possibilities of action. Given the pervasiveness of views that focus one-sidedly on their restrictive character, we would like to emphasise this restricting and enabling aspect of social-as well as organisational-structures. We can go even further: Enabling is based on restriction. (Time-coordinated action, for example, is based on restrictions on action, such as those imposed on us by rules of punctuality, schedules etc.). Modern organisations are a special case only insofar as these restricting and enabling structures-rules and resources-are established reflexively and their fixation is attempted by means of formalisation. Formality here means first of all formal determination. The obligations, expectations, rights and resources specified in the formal structure refer-and this is another, the primary meaning of formality-not to concrete contents and situations, but to generalisable 'cases'; not to concrete persons, but to positions ('jobs'), departments, areas of expertise etc., and finally to the corporate entity itself (as a legal person, for example), in this sense establishing formal relations between positions/organisational units/organisations, but not concrete relations between persons. The fact that the latter relations are (supposed to be) formalised and formed in this way, however, points to the power dimension of formal organisation and to the way power can be expanded by its detachment from concrete persons (Coleman 1990). Indicated in the formal structure are also modes of attribution that permit one to punctuate the stream of action; to make out of the stream of practical intervention in the world delimited acts, which is to say actions; to break them down into 'responsibilities' of this or that department, into causes and effects, costs and benefits etc.; and to constitute the identity and boundaries of an organisation.5

Giddens distinguishes between three dimensions of the social that are initially only separable analytically: At the level of social structures, these dimensions are called signification, legitimation and domination. When referring to the corresponding action or interaction, they are called communication, sanction and power (Figure 1; Giddens 1984, 29). As they interact, actors mediate the level of action with the level of structure by making the rules and resources under situational conditions into modalities of their action in ways specific to the situation and in accordance with their biographies and competences, which is to say in highly particular ways.

When members of organisations communicate with each other, they refer reflexively and recursively to structural forms-rules in the sense of generalisable procedures-of signification, which they in this-always situational, particular-way make into modalities of their action. They exercise power in interactions by referring to organisational resources that they bring into the interaction sequence as means of power (facilities). They sanction by attributing their actions to norms and evaluating and judging the actions of others on the basis of norms that they derive from reflexive recourse to the ways and means of legitimation; in organisations, for example, they might refer to the practices of evaluation used for persons, performance, processes, buying and selling behaviours and so on. And in doing all this, the actors (re-)produce the organisational structures: the structure of signification, legitimation and domination-which is to say, the existing rules and resources.

5 Stolz and Türk (1992) emphasise that in this way, incisions are made in the world, social interrelationships are severed, blanked out, "desymbolised".
Let us summarise: Following Giddens, the structures of social systems can be broken down analytically into two types of rules and two types of resources:

1. Rules for the constitution of meaning (signification) establish what can be called the cognitive order of a social system or, in our case, of an organisation. For Giddens, this includes all those aspects related to the interpretation of the world as the foundation for action. In organisations, this refers, for example, to interpretative schemes, symbols, myths and so on. Even the sensual-aesthetic aspects of organisations, for instance, their architecture or, less concretely, the attractiveness of actions and objects of action, form part of this cognitive order (Ortmann et al. 1990, 31-35).

2. Rules for sanctioning social action (legitimation) make up the normative order of an organisation. From examining the rise and fall of organisational culture as a possible object of reflexive structuration-that is, organisational design-it is apparent that feasible cognitive and normative orders are only the tip of the iceberg, also in organisations.

3. Allocative resources enable actors to control material aspects of social situations such as the disposition of factors of production, goods produced or money.

4. Authoritative resources, by contrast, permit the exercise of power over persons, for example, by determining workflows, schedules and pay.
We view Giddens' structuration theory, which we have sketched above in outlining a few of its basic lines of reasoning, as a kind of socio-theoretical framework-some even speak of a meta-theory-for social science research. This framework needs to be supplemented by building blocks from theory of society and, in our case, organisational theory. To this we will now turn.6
6 In our view, however, a full-fledged theory of society, let alone a theory of modern society, does not currently exist, nor does Giddens offer one. We are therefore very cautious in this respect and limit ourselves to sparse references concerning the acquisitive principle in capitalist economies (Section 2.1) and the enormous expansion of the technical, organisational and economic possibilities of space-time binding as a characteristic of modernity (Section 2.4). In dealing with organisations as phenomena of modernity, however, organisational theory itself contributes to a theory of modern society, which we are thus helping to construct to a small extent, though we are unable to indicate the exact place or significance of organisational theory within it. (We cannot wait for a complete theory of modernity only to then 'incorporate' organisational theory into it; rather we must proceed recursively this time as well, following a-hopefully-creative circle from theory of society to organisational theory and back and running through it again and again).
---
2 Organisation as a Reflexive Form of Structuration
Hardly anyone would deny the enormous relevance of organisations for modern society (as, for example, the admittedly somewhat outmoded talk of the organisational society indicates). And-aside from the most prominent exceptions of Luhmann and Coleman-hardly any author of 'grand theory' gave the modern organisation the place it thus deserves in his theoretical architecture: not Parsons,7 not Habermas, not Bourdieu, not even Giddens.
Nonetheless, a growing number of authors and publications have been attempting to claim structuration theory for organisational research. From the point of view of Giddens' theory, this should come as no surprise, since the three concepts whose centrality we have underscored here-reflexivity, structuration and recursiveness-easily and plausibly converge in the concept of organisation if organisation is defined as reflexive structuration in precisely that double sense of recursive production ("organising") of a product ("organisedness", organisation as a social system) discussed above. In any instance where reflection sheds light on structures and structuration and enters into the practice of structuring as well as into its results, we are dealing with organisations. Reflexivity is institutionalised in organisations-reflection, namely, on the structuration of collective action-which is not to say organisations are a paragon of rationality. The labour, management and organisational sciences of the twentieth century, especially business administration, are already forms of the reflection of reflection, even if they are also forms of reflection that were halted early on insofar as they long remained restricted to the search for universal or situational one best ways. As the twentieth century comes to a close, however, awareness of contingency has once again sharpened considerably and taken hold of the Taylorist paradigm of mass production and, beyond that, of one-best-way thinking in general.
However, applying structuration theory to the ends of organisational research also suggested itself from the point of view of organisational theory. First, organisational theory was in urgent need of a foundation in social theory and a theory of society, an urgency that is only magnified in view of a development that can also be observed elsewhere-for example, in industrial sociology, technology studies and economics-towards an explosively increasing number of theoretical perspectives and paradigms that hardly communicate with each other, such that warning about the fragmentation and even "dissolution of organisational research" (Friedberg 1995, 96, our translation) is not entirely unjustified. Our impression is that the dimensions of the social proposed by Giddens-signification, legitimation and domination-are well-suited for carefully integrating these diverging theoretical perspectives: interpretative, culturalist and institutionalist approaches; approaches based on theories of power, domination or control; and economic approaches to organisational research.8
Second, the concept of structuration, with its notion of the duality and recursiveness of structure, allows for what we find to be a relaxed approach to controversies, which, as we know, are notorious also in organisational theory-and which we should, however, gradually leave behind. We have in mind especially the controversies around the question of ‛action versus structure' (or system), which in organisational theory has been answered often enough in favour of one side or the other-in favour of action or decision, for example, by Simon and the Carnegie Mellon school, and in favour of structure by the theories of structural contingency, for example, in which what actors do plays such an underestimated role. Yet Giddens' concept of structure-with its dual components, rules and resources-offers particularly favourable ways of approaching questions of organisational theory.
While this may be self-evident for the concept of rules-as the relevance of organisational rules certainly seems obvious-it requires careful explication by way of precisely defining the concept of rule. It applies then also, and particularly so, to the concept of resources, which is indispensable in the framework of political and economic organisational analyses. One need only think of the resource-dependence approach of organisational research, the resource-based view of strategic management, or of micropolitical or strategic organisational analysis (which cannot get by without a concept of power resources), or generally of the view that the enterprise is an institution for the transformation of production factors into products.
Third, the concept of recursiveness suggests also considering the relationship of organisations to supraorganisational institutions in its light. This means taking account not only of the influence of these institutions on organisations but also of the reverse influence organisations have on the manifold institutional conditions of organisational action-a supplement to new institutionalism that we consider highly significant.
Fourth, a peculiarity of Giddens' concept of structure consists in its emphasis on space-time binding as a decisive effect, an aspect whose relevance is becoming strikingly clear today in view of, for example, just-in-time manufacturing in regional and global networks.
Fifth, the concept of organisational structures, which-except in traces of memory and expectation-exist only in action and are therefore always under the tension of action, permits an unconstrained understanding not only of organisational stability and inertia but also of organisational transformation; and it does so with a view both to the individual organisation and its intended change-the keyword here is 'reorganisation'-as well as to a possible unintended transformation of an organisation or of the ‛genre of organisation' or of particular types of organisations, including questions of their origin, their genesis-the keyword here is 'evolution'.
Sixth, Giddens' concept of action can connect up with a theoretical description of the relationship between organisation and psyche that allows us to reckon with those dramatis personae who create and change the organisations in the first place-and who are also, for their part, to some extent creatures of the organisation.
While these six problem areas are certainly not the only ones, nor are they ordered here in a perfectly systematic manner, they are nevertheless highly significant desiderata of a socio-theoretical foundation for organisational theory. We now intend to discuss them in greater detail.
---
The Dimensions of the Social and the Role of the Economy
All organisational action, we emphasise once again, plays out in all three dimensions of the social at once: A certain organisational vocabulary is used repeatedly as a set of interpretative patterns and is reproduced ipso facto as an element of the cognitive order of an organisation. The ‛laws' of the formal organisation, the formal rules, evaluation procedures and leadership styles, but also the informal standards of what constitutes good work, for example, are applied, followed (or subverted) and thereby reproduced (or undermined) as the organisation's order of legitimacy. The organisation of labour implies a form of domination over (the labour of) human beings and is reproduced as an authoritative resource through repeated practice. Know-how and technology permit domination over nature and matter and are reproduced as allocative resources by recurrent application. In general, organisational action implies recourse to a set of organisational patterns of interpretation and norms, organisational rules and resources derived from an organisational structure, which in this way-by application of organisational rules and resources-is recursively reproduced and in some circumstances modified in the process.
At first glance, Figure 1 seems merely to provide an unsatisfying juxtaposition of these dimensions of the social. Rather, it raises the question-and this is in many ways the most exciting question of organisational theory: How are the dimensions of cognition, legitimation and domination related? We all know, to put it mildly, that they have a bearing on each other: Our norms depend on our understanding of the world, on our patterns of interpretation and vice versa; our patterns of interpretation, concepts and definitions of situations are established with power, and they are, conversely, powerful means of exercising power. And whatever is considered legitimate depends likewise on power relations, just as, conversely, norms function as instruments of power. In this ‛horizontal' direction, in the relationship of the dimensions of the social, we thus reckon with recursive relationships of constitution, and we indicate these relationships in Figure 2 by arrows, which set it apart from Figure 1 only in emphasising this recursive circularity once again.9

How does the economy come into play in all of this? The traditional way of understanding economy as the management of and struggle over scarce resources would be too narrow for Giddens-scarcity is better understood, following Commons (1936, 243), as itself institutionally produced: institutional scarcity.
A general social theory-based definition of economy must be conceived more broadly, and in Giddens (1984, 34) this is the case. He regards the inherently constitutive role of allocative resources in the reproduction of "societal totalities", be they entire societies or organisations, as the differentia specifica of the sphere of the economic. Indeed, only in modernity does this sphere experience that institutional differentiation which has become so self-evident to us. Modern economic institutions-money and credit, labour markets, product and financial markets, competition and enterprises, all of which designate institutionalised practices that are historical to the highest degree-are marked by the dominance of the (re-)production of allocative resources. And for many organisations-enterprises, above all, of course-this might be a suitable characterisation. We can then speak of economic organisations. In the case of capitalist enterprises, practices of profitable (re-)production even dominate. We can take this as a starting point for a structurationist theory of the enterprise that gives economics its due without reducing the enterprise to pure economics. The acquisitive principle, as Gutenberg called it, is not an anthropological constant, not a psychologically but an "institutionally anchored regulative without which the system could not function" (Gutenberg 1973 [1955], 9, our translation). However, this institutional dominance by no means signifies that the other dimensions of the social are insignificant in and for economic practices. Rather, even when it comes to the rules and resources in and of economic organisations, we are dealing with that recursiveness between the dimensions of the social indicated in Figure 2. Economic practices go hand in hand with the reproduction not only of allocative and authoritative resources but also of rules of signification and legitimation. This manifests particularly clearly in the maintenance of the cognitive and normative order, which is shaped by accounting and bookkeeping systems-an order that recursively ensures that (economic) practices are recognised and evaluated as economic, or else modified (rationalised) until they roughly conform to that order (cf. Sydow et al. 1995, 33-40).

9 Cf. Ortmann (1995a, 368) for a similar account and detailed illustrations based on the example of lean production.
The extent to which the rules for the constitution of meaning depend on the allocative and authoritative resources of an organisation and vice versa can also be shown, for example, by the fact that concepts for the organisation of production such as Taylorist mass production or lean production never concern only the production technology ‛per se'-assembly lines, computers, automation, storage areas and technology, transport technology-but always the practical handling of it. And this immediately implies questions about the domination of human beings, questions about legitimation, about fairness in dealing with human beings, for example, and about signification. (What exactly is lean production? What does group work mean? What are the rights and duties of a machine operator? And so on).
"Rules cannot be conceptualized apart from resources, which refer to the modes whereby transformative relations are actually incorporated into the production and reproduction of social practices. Structural properties thus express forms of domination and power" (Giddens 1984, 18).
It has already been indicated above that we can further differentiate the dimension of domination into politics and economy along the lines of the distinction between authoritative and allocative resources, as long as we keep in mind that this does not mean economy in a pure sense-that is, purified of the other dimensions of the social-but refers instead to the more or less far-reaching institutional differentiation of two spheres, one of which is concerned primarily with the domination of human beings and the (re-)production of authoritative resources, and the other primarily with domination over nature and matter and the (re-)production of allocative resources. This distinction of practice is quite common also within organisations, as the-critically intended-talk of ‛political decisions' in enterprises indicates whenever considerations of power prevail over economic concerns. We can then inquire further about the recursive relationship between these two dimensions: the use of power resources (facilities in Figure 2) may increase efficiency and/or profit, just as, conversely, economic resources increase power over human beings. And we can ask about the relationship of each, now separated into politics and economics, to the interpretative schemes and norms by means of which we define the 'is and ought' in organisations. We will not go into these differentiations in any more detail here, but instead conclude by pointing out-using the example of law and politics-that organisational action stands in a recursive constitutional relationship not only to organisational but also to supraorganisational structures-to the institutional environment, as we can say in the language of new institutionalism. 
It is constrained and enabled by the institutional environment-think of the political and legal regulation of telecommunications or TV markets, labour protection and co-determination laws and so on-and it has an impact on these supra-organisational structures (in this example, political and legal structures). The latter does not always involve strategic intent, but often enough it does-namely to influence those restricting and enabling structures.
Organisations try to regulate "their" regulations, that is, the ones that affect them: regulation of regulation, recursive regulation. Ortmann and Zimmer (1998) call this "strategic institutionalisation".
---
Institutions and Institutionalisation
It could be shown that this way of wielding influence (for instance, via lobbying) involves the use of communicative, normative, political and economic means and that it targets all the above-mentioned dimensions of the social: seeking to change interpretations and interpretative schemes 10 and influence perceptions of legitimacy and norms 11 as well as political and economic conditions. The "return of society to organisational theory" then also means addressing the enormous influence of organisations, and especially of enterprises, on the institutional-but particularly the regulatory-constitution of society as a whole. Business administration has always had a certain interest in this topic. Today, the pertinent questions of how environmental protection laws, accounting rules, capital market laws, banking regulations, anti-trust legislation and so on affect (the efficiency of) enterprises are handled predominantly by property rights theory, transaction cost theories, agency theories, political economy and public choice theory, as one can now learn from the instructive anthology Regulation and Corporate Policy [Regulierung und Unternehmenspolitik] by Sadowski, Czap, and Wächter (1996). Usually, however, it is how certain regulations operate-especially their efficiency effects-that is of interest. In business administration, the question begins with a regulation that is somehow assumed or alleged to be given in order to trace its effects on enterprises and ‛the' economy. Influence in the opposite direction (from the enterprises to those regulations), as obvious as it is, is much more seldom discussed.12

From the perspective of structuration theory, however, institutions and regulations must be analysed from the outset as a-restricting and enabling-medium and as a product of action, which is to say also as a product of strategic action that calculates and seeks to influence the effects of institutionalisation processes and regulations in light of well-understood interests.

Tracing the lines of the Giddensian dimensions of the social permits us to distinguish and consider institutional orders that pervade all of society-all of its symbolic, political, economic and legal institutions-in this recursive constitutional relationship to action and, now especially, to the action of organisations as corporate actors. This ties institutions and institutional orders more closely to action, more closely to strategic action and therefore more closely to economic and other interests and thus more closely to power, conflict and politics than is often the case in neoinstitutionalism. And if we follow Giddens (1984, 17) in defining institutions as those societally imposed regular practices that have the greatest distanciation in time and space within societal totalities (for details, see Ortmann, Sydow, and Türk 1997, 25-33), then perhaps this is also a sufficient response to concerns voiced, for example, by Türk (1997) and Nelson (1997): that the concept of institution is too vague to be truly fertile.

11 To stay with the previous example: Should private television be permitted or banned? Should advertising on television be permitted, banned or subject to time restrictions?

12 Cf. in the above-mentioned anthology by Sadowski, Czap and Wächter, however, Wenger's (1996) revelatory polemical contribution (on the influence of powerful interests on capital market law), which draws on public choice theory and (on this point in particular) the theory of rent-seeking (Tullock 1967), and Walz's (1996) contribution to the same volume, which is somewhat similar but more moderately argued. On the pertinent theoretical foundations, see the introduction by Sadowski, ibid. Cf. also the impressive work of Hutter (1989), who combines transaction cost theory and systems theory into a "self-referential theory of the economy" and of the "production of law". Richard Nelson (1997) discusses (in section V.C) pathways of such recursive regulation that can sometimes lead entire industries into a lock-in of their status quo. All these theoretical approaches can enrich and complement each other.
Societally imposed practices-this definition, however, requires further clarification of the concept of rule it contains.
---
Rules, Resources and Modalities
Giddens defines rules-those of the constitution of meaning as well as those of legitimation-quite simply as generalisable procedures of practice. As "generalisable procedures applied in the enactment/reproduction of social practices" (Giddens 1984, 21), they are inherent in the actions of the actors (and subsequently in their memory)-and nowhere else. Verbally formulated rules, such as those found in legal codes or organisational instructions, job descriptions etc., that is, in the 'blueprints' of formal organisations, are not rules in this sense, but "codified interpretations of rules" (Giddens 1984, 21). Games in organisations, to which we will return below (see Section 2.5), are based largely on (game!) rules in the sense of practised procedures that are not codified.
What we get from all of this are some not very obvious, yet very important distinctions for which the usual terminology of organisational theory often has no equivalent:
1. a regular praxis, made up of organisational practices, in which
2. rules, understood as generalisable procedures, are inherent; these
3. can be formulated as "codified interpretations of rules" (written laws, formal organisational ‛rules'), and finally
4. beyond rules (and resources, see below), the additional structural properties of social systems that are produced, reproduced and altered by that regular praxis, but are themselves neither rules nor resources: for example, the division of labour, hierarchy, the spatiotemporal interrelatedness of interactions, or centralisation-in short, everything that is usually called structure in organisational theory but is not structure in the narrower sense of rules and resources that we have given it here.
The strange thing about rules of all kinds is that they cannot themselves govern the way they are applied and therefore, strictly speaking, it is only in this situational application that their meaning is fully decided-also and especially their meaning for what happens in organisations. This is of particular significance when it comes to formal-explicitly formulated-‛rules' (better: codifications of rules), which traditionally play such a prominent role in organisational theory. Application, which only appears to be secondary because it is derived from the formal ‛rule', is in reality eminently constitutive for the meaning of this ‛rule', and the concept of informal organisation points to how this supposedly so marginal application is of the utmost importance for the functioning of any organisation, even and especially when that application consists in a deviation, an undermining-indeed, even when it consists in the violation of the formal set of ‛rules'. 13 As Alfred Schutz (1967 [1932]) has said, in this set of ‛rules' is a certain emptiness that needs to be (ful-)filled in and by application. This is as true for rules of constituting meaning as it is for rules of legitimation, which can only be (ful-)filled, supplemented/replaced in situational, contextual circumstances, equipped with the indices of the here and now-even as they are stripped, in concrete action, of their typicality.
Giddens' term modalities refers to the rules and resources that are (ful-)filled in this way, deployed hic et nunc by someone with a specific biography and competency. According to this interpretation,14 they designate the place of mediation between action and structure (between subject and object), and therefore Giddens' reception of Schutz is, on this reading, of some significance for the notoriously controversial question of whether Giddens succeeded in this mediation.15

If we have continuously cited the example of the constitution of meaning by use of interpretative schemes and the concomitant (re-)production of a cognitive order as its corresponding structural dimension, all this applies equally to the constitution of legitimacy by use of norms and the concomitant (re-)production of an order of legitimation-and to every practical intervention in the world by use of facilities (in the broadest sense) and the concomitant (re-)production of an order of domination. The latter proceeds with recourse to resources made available by an existing order of domination, and every disposition under situational circumstances implies an analogous movement from an emptiness-the somehow still empty generality of resources as means to typical but not concretely defined ends-to the fullness of the now, here and in this way, which the user only imparts to it in praxis.16

As explained above, we consider it a great advantage that Giddens' concept of structure in this way includes specific resources, especially with a view to organisations and especially enterprises. Note that this concept of resources avoids the opposition between the material and the immaterial, even if it sometimes seems otherwise:
"Some forms of allocative resources (such as raw materials, land, etc.) might seem to have a 'real existence' in a way which I have claimed that structural properties as a whole do not. But their 'materiality' does not affect the fact that such phenomena become resources, in the manner in which I apply that term here, only when incorporated within processes of structuration" (Giddens 1984, 33).
That we first must make resources into resources, that we have to generate them in recursive loops of organisational praxis as-socially significant-resources before we can use them as such; this is an insight that is thoroughly incorporated in the resource-dependence approach of organisational research as well as in the resource-based view of strategic management, where, as one can see, it has considerable consequences for the praxis of organisation and management (on this, see Knyphausen-Aufseß 1997).
---
The Binding of Time and Space
In Giddens' terminology, structures-that is, rules and resources-‛bind' time and space.17 ‛Instantiated' in situated practices, they provide the latter with temporospatial extension, institutionalisation and a globalisation that can be supplemented, replaced, offset or corrected by certain forms of localisation and regionalisation. Technologies of storage, irrigation, conservation, transport and, as of recently, especially data storage and processing as well as communication provide the enormous possibilities of time-space extension that we are confronted with today. Organisations are the medium and result of the development of precisely such technologies: They enable or promote technology development, which in turn massively accelerates the development, proliferation and power of organisations in society. With ‛storage', Giddens-with a view to modernity-has in mind also and above all the storage of authoritative resources in memory, in the form of writing and, today, by means of computer technology; but he also has in mind the form of organisation, which considerably increases the possibilities for storage. The origin of this special nuance (Giddens 1979, 198-233) in a tradition which, from Husserl via Schutz and Heidegger to Derrida, 18 has made the temporality and spatiality of human existence a prominent object of reflection, may have impeded-and may continue to impede-its reception in Germany. However, this will change in times when dichotomies such as global-local indicate spatially the distance and the rending tension contained therein. The small dairy cooperative in East Frisia whose dried milk is enjoyed (months or years after production?) by children in southern Africa is a standard example of what Giddens (1990, 64-65) calls "time-space distanciation"-the globalisation of modernity, a globalisation that initially occurs naturally and only later becomes an object of reflection, which is also to say an object of reflection on the structuration of global enterprises and enterprise networks.

16 Schutz has also shown this, perhaps surprisingly, for the case of the ‛tool': a product that is used in further recursive loops of human praxis for the purpose of producing (cf. Schutz 1967 [1932], 201). On the recursivity of ends and means thus created, cf. Ortmann (1995a, 84, fn. 3, 112-118). It is not simply that (‛purposeful') means are derived from ends and measured against them; rather, ends are also seen anew, rediscovered, re-posited in the light of new means. This recursivity undermines-deconstructs-any affirmation of economy that has to operate with fixed needs, orders of preference and ends, as Ortmann (1995a, 98-124) has attempted to show on the basis of the relationship between ‛recursivity, productivity and viability'.

17 On time binding, cf. Luhmann (1995 [1984], 125, 221-224), who with reference to Korzybski (1949) likewise traces time binding to structure formation.
The fact that organisation is always about spatiotemporal organisation-about production schedules, time worked and time lost, operating hours, just-in-time production, night work and overtime, cycle times, target times, set-up times, break times; and about production spaces, reworking and storage spaces, transport routes and communication channels, outsourcing, regional and global networks-requires no explanation. Modernity has given rise to enterprises that, on the one hand, have nearly disappeared from view-that are ‛hollow,' ‛virtual'-and, on the other hand, operate worldwide; enterprises that are detached-‛disembedded'-from their local contexts and 're-embedded' only via organisation, information technology and communication technology.
The close attention Giddens' social theory pays to "time, space and regionalisation", his concept of structure and structuration especially tailored to this and his keen eye for the fact that the concept of space-time distanciation is directly related to the theory of power (Giddens 1984, 258)-in other words, that space-time extension by way of organisation is of fundamental significance for the extension of power: this too contributes to the attraction his theory holds for research on organisational theory.
18 Especially Giddens' definition of structure as an "intersection of presence and absence", a "virtual order" involved in the reproduction of situated practices in time and space, originates in Heidegger and Derrida. Rules and resources are in this sense outside of time and space except in their "instantiations" in action and in their "coordination as memory traces" (Giddens 1984, 16-17). In this respect, they are characterised by an "absence of the subject"-just as "language" is without a subject. It is in action, however, that they have their actual existence-they receive it by means of that (ful-)filling, supplementation and replacement discussed above with reference to Schutz. On Giddens' reception of Derrida, cf. Ortmann (1996).
---
Organisational Change
Organisational change may be more or less intentional or unintentional. In the former case, we will speak of reorganisation, in the latter of evolution. ‛Evolution' may refer to individual organisations, 19 but above all to genres, here: populations of organisations. Of course, they are interrelated: Evolution also takes place through reorganisation, the factual consequences of which are, incidentally, never entirely intended. But evolution proceeds also by way of change that is unplanned from the outset, as well as by selection. Evolution, as we use the term, does not imply any sort of advancement, however defined.
In the case of reorganisation, the intentionality of change means that it is intended but not that it is realised as intended. In opposition to rationalist textbook versions of reorganisation, we have borrowed Lévi-Strauss' image of bricolage, tinkering: a productive action that works on an unfinished task with a limited supply of means-a tinker box (Ortmann et al. 1990, 391-395). Bounded rationality, goals that change depending on opportunities-think of the opportunities created by new technologies-and tool-like means for tasks of a type only partly defined by ends: these are the most important features of tinkering as well as of reorganisation, which makes changes to the structure of organisations, to their rules and resources, in a political process that ultimately outstrips the metaphor of the lonely tinkerer. Reorganisation is the conscious, reflexive re-structuration of ‛organisation' as a field of action, a re-structuration that aims to change an organisation's rules and resources while playing out in every dimension of the social as an attempt to change established structures of signification, legitimation and domination. This is subject, like all organisational action, to the recursiveness of structure. Reorganisation-as well as resistance to change-must therefore make use of the very means of power made available by the (still) given organisational structure. Disputes about restructurings are often so fierce because they regulate how power will be distributed in the future rounds of organisational games.
We therefore follow Giddens as well as Crozier and Friedberg (1980 [1977]) in interpreting resistance to reorganisations not as an expression of the irrationality, stupidity and inertia of human nature but, on the contrary, as an organisationally induced phenomenon: the usually thoroughly rational behaviour 20 of players in an established game of routine who have become comfortable and proven themselves within its structures, its game rules and distribution of resources. And now, in the face of a game of innovation that impacts on and may destroy the old game structures, and should at the very least change them, they react by deferring, putting on the brakes or resisting-not seldom, by the way, while citing reasons that are good even from an organisational point of view. This view of things deprives reorganisation processes of much of that well-ordered rationality that textbooks often attest to them, and which is frequently expressed not only in unaffected ends-means hierarchies and correspondingly ‛rational' step sequences and phase schemes but also in the more or less unshattered belief that the results of reorganisation processes are also properly understood as the results of intentional action.
Even after abandoning such a picture of rational and controlled reorganisation, the rationality that is threatened in this way can rescue itself by means of ever-rationalist social Darwinist conceptions of evolution. Then it is not rational reorganisers but rather environment, selection and/or adaptation that ensure the survival of rational, perhaps even optimal, forms of organisation. We can here leave aside the critique of these ideas, as handed down from Veblen via the population ecology approach to some variants of new institutionalism, because this has been done conclusively elsewhere, for example, by Kieser (1993a) and by Nelson (1997). We would just like to point out that Giddens (1984) strongly rejects this kind of evolutionism in the social sciences as well as any theory of social transformation understood in this way. His primary reasons are twofold: (1) People do not make their history just as they please but in knowledge of this very history, as reflexive beings, and they change this history depending on their knowledge. The same, by the way, could be said of organisations (cf. Kieser 1993a, 255-256). (2) Neither ‛societies' nor ‛organisations' (again Kieser 1993a, 257) are fit to serve as those clearly definable basic evolutionary units that are independent of the course of history itself, but whose very evolution should be the issue at stake here. Representations of organisational change also require a completely different form and, in short, must operate with concepts such as episode, coincidence and critical threshold of change; they must reckon with contingency, necessity and chance (cf. also Giddens 1984, 244-262), which can force change into certain courses and trajectories.
Path dependency ("organizational tracks", Greenwood and Hinings 1988) is an important concept in this context because it allows us to grasp fairly well the peculiar mixture of chance (‛small events' in the beginning) and necessity (‛lock-in' in the further course) that brings about the ‛evolution' of organisation in specific directions. 21 It is not the heroes of a universal principle of efficiency-‛survival of the fittest'-who are victorious according to this account but lucky winners who could just as easily have become losers, but who now, having won, have at their disposal the means to build on their victory: to oust the others from the market permanently, to gradually ensure their own efficiency and, last but not least, to rewrite the criteria for success and history itself in such a way that their victory appears as a heroic act of efficiency: Winner takes all. It is not by chance, then, that path dependency runs like a golden thread through Nelson's (1997) contribution.

21 Cf. Arthur (1989, 1990), David (1986, 1990), North (1990, 76, 92-104), Ortmann (1995a, 151-174). Path dependence means that the direction of processes depends on their course and on ‛small events', with each step being determined by the one before, so to speak, and not from the beginning-for example, by the compass bearing ‛efficiency'. Organisational solutions do not necessarily gain acceptance because they are efficient (but instead because of contingent circumstances and small events), but they can be (made) efficient because they have gained acceptance. Consistent with this view, cf. the non-economistic portrayals of the triumph of computer technology and the genesis of systemic rationalisation and lean production in Ortmann (1995a, 172-173, 210-211, 408-409); for a general discussion of the relationship between contingency, chance and necessity in connection with the development of forms of production with reference to Stephen Jay Gould (1989), cf. Ortmann (1995a, 9-25).
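The 'small events, then lock-in' dynamic described here is commonly illustrated in the path-dependence literature with a Pólya-urn-style reinforcement process: each adoption of an option makes the same option more likely to be chosen next time, so early chance draws largely fix the long-run outcome. The following is a minimal, purely illustrative sketch (not from the text; the function name and parameters are our own):

```python
import random

def polya_urn(steps: int, seed: int = 0) -> float:
    """Simple Pólya urn: two competing options start with one
    adopter each; each new adopter picks an option with probability
    proportional to its current share, reinforcing that option.
    Returns the final share of option A."""
    rng = random.Random(seed)
    a, b = 1, 1  # one initial adopter of each option ('small events' matter here)
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1  # option A chosen, which makes A more likely next time
        else:
            b += 1  # option B chosen, reinforcing B
    return a / (a + b)

# Different seeds = different early chance events -> very different
# long-run shares, even though the reinforcement rule is identical.
shares = [polya_urn(10_000, seed=s) for s in range(5)]
```

The point of the sketch is only that the winning option need not be 'fitter': which option locks in depends on the early random draws, not on any efficiency criterion.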
Finally, structuration means structuredness and structuring. In principle, stability and change are on equal footing here. This is perhaps the greatest advantage of which a Giddens-inspired organisational theory can boast: that it allows us to think both the sometimes so rapid changes and the sometimes sheer despairing inertia of organisations-as well as the complication that change without stability (e.g. valid interpretative schemes and guaranteed access to resources) is not even possible (and vice versa). For both, of course, structuration theory provides only a theoretical framework within which the rigidity and conservatism of organisational structures-or indeed their changeability-become theoretically workable. It does not provide this theorisation for particular empirical cases-it cannot and does not intend to at its level of generality. Yet it renders stability, inertia, encrustation, blockage and immobility issues that can be addressed theoretically because they are deciphered as results of recursive reproduction-as results of constant movement.
---
Organisation and Psyche
At least since Barnard (1938) it has been clear that organisations consist of actions, not persons; this is a consensus that would not deny that persons are ‛important for organisations' but would like to insist (a) that no organisation subsumes the whole person with all his or her activities, (b) that it is precisely their-organisational-activities that are organised and in turn (re-)produce the organisations and not, for example, the character traits of persons, their hopes, doubts, aversion to garlic, secret thoughts or vices and virtues, and (c) that all organisational activities are understood as elements of organisations (and not just productive labour, to cite an example taken from older versions of both Marxism and business administration, cf. Witt 1997).
But how then are we to grasp theoretically that-and how-persons are ‛important for organisations'-and, nota bene, vice versa? Psyche, motives, performance readiness, anxiety, likes and dislikes-that is, the needs 22 of the acting persons, to use the concept under which economic theory subsumes all this without dealing with it-how can they feature in our social-and organisational-theoretical outline?

22 McCloskey (1990, 97-100) does not hesitate to speak of desire. "The economy depends today on the promises made yesterday in view of the expectations about tomorrow. […] A correct economics […] is historical and philosophical, a virtual psychoanalysis of the economy, adjusting our desires to the reality principle" (McCloskey 1990, 109). The fact that this term, desire, tends in specialist jargon to be immediately replaced by less dangerous ones-taste, need, preferences, utility functions-permits needs to be kept outside, exogenous, and subsequently dispensed with-how disconcerting in view of McCloskey's quoted definition. Structuration theory, as the following will show-they are almost verbatim excerpts from Ortmann (1995c, 252-259; cf. also 1995b)-suggests an endogenising of needs: the recursive production of needs through production and consumption. The fact that orthodoxy cannot admit this, or can do so only at its margins, does not detract at all from the unheard-of importance of this recursivity. Unheard-of is perhaps the wrong expression. Concerning this insight, Marx, Veblen and Keynes, for example, were not given a hearing-in fact, it went in one ear and out the other.

The fact that it does not suffice, for the more precise definitions that are required here, to imagine adding in psychoanalysis as the competent discipline, as it were-after all, psychoanalysis portrays "the individual in his or her unmistakable identity, in his or her life-situational and life-historical individuality" (Lorenzer 1976, 19, our translation)-is due to the circumstance that the object of psychoanalysis is not real interactions and object relations but the interpersonal relations that are embedded in personality or, to put it more drastically, "the inner world of fantasy scenes" (Lorenzer 1976, 24-25, our translation). Psychoanalysis is concerned with schemes of events-as-sensed (‛Erlebnisentwürfen'), not the investigation of events-as-such. 23 Interactions, however, as an element of the social, are events. Psychoanalysis cannot immediately connect up with this. Hope for mediation is nourished only by the insight "that the figures of experience derive genetically from real interaction, the foundational interaction forms are always the inner precipitation of interactions and, furthermore, the forms of interaction are functionally related to real interaction as drafts of behaviour and action" (Lorenzer 1976, 25, our translation).

23 On this and on the role of psychoanalysis in social research, see also Leithäuser and Volmerg (1988, here, e.g., 36-37, 45, 55-60).

"Interpersonal relations in personality" is thus supposed to mean:

"From the collective social structures, we get past the mediating figures of the interaction game and into the individual in the following way: The […] interplay in the mother-child dyad influences the child's organism, calibrates its behavioural reactions; that is, it is precipitated in the child's organism. […] Interaction is laid down as an interaction engram to then be instrumentalised as a design of behaviour. Themselves the result of interactions, these drafts of behaviour determine subsequent interactions. Genetically as well as functionally, they are related to interactions; they are specified by interactions in order to determine further interactions; but as regulated regulators, they are internal to the individual.

As internal regulators, they clearly do not belong to the observable level of interaction phenomena, rather they are building blocks of the essence of personality in its societally specified form. These drafts of behaviour embedded in the personality, which constitute the personality in its essence, have been called 'specified interaction forms'. 24 The specified forms of interaction are the societal relations in the concrete individual" (Lorenzer 1976, 20-21, emphases in original, our translation).
As we can see, they are the results of a recursive process of (re-)production: Emerging from interaction processes, they enter into new interactions in important ways and are fixed as interaction engrams in order to then be related, in the form of interaction designs, to action-which recursively stabilises that fixation. Lorenzer's ‛specified interaction forms' designate the level of mediation between action and personality structure. Thanks to Lorenzer's reformulation of psychoanalytical concepts in terms of forms of interaction-nota bene, this somewhat misleading choice of words refers to interaction designs or patterns in individuals, not to forms of real interaction-we can offer a sketch of the individual that can be meaningfully linked to social and organisational theory, a schematic version of which looks something like Figure 3. 25 "The interaction forms [interaction designs; the authors] can only appear in interactions, be they imagined or really occurring scenes" (Lorenzer 1976, 25, our translation). The psychoanalytic achievement, however, is still the reconstruction of events-as-perceived ('Erlebnisrekonstruktionen'), not of events.
[Figure 3: behavior/interaction - specified interaction form (designs of specified interaction) - interaction engram - personality structure]

24 Here Lorenzer quotes his own work "Symbol, Interaktion, Praxis" (1971).
25 Lorenzer (1976, 30) opts for a somewhat different, much more complex form of representation.
---
Organisation as Reflexive Structuration
To speak of interaction engrams and personality structure as we do here, and as Lorenzer does in his work, is also to imply something like a cognitive structure that is not, however, central to the framework of psychoanalysis. Desires and fears on the one hand and something like reason on the other-both sides are accounted for in Giddens' model of action, which distinguishes between three levels of consciousness in actors: discursive consciousness, practical consciousness, and unconscious motives/cognition (Giddens 1984, 7).
It also considers both "reflexive monitoring" and "rationalisation" (in the sense of commonsensical justification) as well as the "motivation of action": "I distinguish the reflexive monitoring and rationalization of action from its motivation. If reasons refer to the grounds of action, motives refer to the wants which prompt it. However, motivation is not as directly bound up with the continuity of action as are its reflexive monitoring or rationalisation. Motivation refers to potential for action rather than to the mode in which action is chronically carried on by the agent. […] For the most part motives supply overall plans or programmes-'projects', in Schutz's term-within which a range of conduct is enacted. Much of our day-to-day conduct is not directly motivated" (Giddens 1984, 6).
With the help of Lorenzer, we can understand this more precisely, that is, in terms of recursiveness and structuration theory: as the relationship between an interaction engram (‛motives supply overall plans or programmes'), which refers only indirectly to interaction scenes, and specified interaction designs, which refer to them directly.
But we need psychological access not only to desires, fears and motives but also to cognition, including its unconscious parts.
For a combination of cognitive psychology and symbolic interactionism, which are linked to Jakob and Thure von Uexküll's recursively constructed models of the ‛function circle' and the ‛situation circle', we have taken hints from the instructive work of Brauner (1994). With its structuration-theoretical design-recursiveness of interaction and cognitive structure (cognitive maps, mental models)-Brauner's dynamic, circular model of interaction represents a cognitive-psychological elaboration of the basic idea we would like to present here. Connections to Anthony Giddens' concept of action are unmistakable, especially the proximity of "mental control of action" [mentale Handlungskontrolle] (Brauner 1994, 103-105) to Giddens' 'reflexive monitoring of action'. Brauner's model makes clear-and it is no coincidence that it shares this with Giddens' actor model-that we produce and change our individual cognitive maps of the world in iterative and recursive loops of practice-that is, in the practical application and reflection of these cognitive maps. Both therefore offer possible links to the most promising approaches of cognitive psychology that are not rationalistically pre-occupied, such as Neisser's perceptual cycle and its reworking by Karl Weick, both fine examples of thinking in terms of recursivity-in this case between perceiving and acting (Neisser 1979; Weick 1985 [1969], 223-226; cf. also Conrad and Sydow 1984, 73-92). Neisser's perceptual schemata are the results of a perceptual learning that enter into new acts of perception as active, information-seeking (individual) structures and are thereby recursively reproduced and possibly modified. Connections to questions of organisational learning are obvious and Weick elaborates them.
We include not only Lorenzer's interaction designs but also these Neisserian cognitive structures in the personality structure of acting persons, and we are now prepared to connect these two descriptions of social and individual structuration. We will then see that the place of mediation between the individual and society-or organisation-is interaction 26 (Figure 4).

26 In the sense of social action which, as in Max Weber, finds meaningful orientation in the past, present or expected future behaviour of others. For more precise definitions and for deviations from Weber, see Ortmann (1995a, 295, fn. 4). If we replace the upper half of Figure 4 with Scott's layer model (1994, 57), we can see that our model is compatible with a version of sociological neo-institutionalism focused on society-wide institutions and governance structures as well as organisational fields, into which the social structure represented in our scheme develops.
Every interaction is simultaneously individual and social action and the perception of events and event-as-such, the putting-into-action of interaction designs and the occurring of social praxis in the medium of social structures. That every interaction is both at the same time does not imply that both are the same thing; nor does the fact that each is something very different imply that they are not related in a comprehensible and specifiable way. On the contrary, there is no "specific form of interaction" in Lorenzer's sense that can be realised in interaction wholly outside the medium of social structures, nor any social practice that is not somehow the realisation of individual drafts of interaction.
Furthermore, every interaction implies, on the side of social structures, (re-)production and institutionalisation (including modification) and, on the side of the individual, socialisation and internalisation, although we cannot reckon a priori with successful or even mutually harmonious processes of socialisation/internalisation on the one hand or of reproduction/institutionalisation on the other. (And if they do ‛succeed' in the sense of achieving mutual correspondence, then we certainly do not have to like the result, as the case of Barnard's organisational personality makes clear. In Barnard's classic example of a switchboard operator at the New Jersey Bell Telephone Company, her conformity went quite far. She specifically chose a subordinate position in an outlying district because from there she could watch her sick mother's house while she worked. When the house caught fire one day, she stayed at her post and watched while the house burned down, showing, in Barnard's words, "extraordinary 'moral courage'" and "high responsibility" regarding the organisational norm of uninterrupted readiness to serve (Barnard 1938, 269). Barnard left a footnote: Her mother was rescued.)
Interactions, these central sites of mediation between the individual and society (or organisation), are therefore not to be characterised merely as agencies of socialisation (Lorenzer 1976, 44) but always simultaneously, with a view to society, as sites of (re-)production of social structures and institutions.27
---
Critique and Outlook
Despite the considerable spread of structuration-theoretical research in organisational theory in particular, the reception of Giddens is marked by noticeable deficits, leaving important room for improvement, especially in the German-speaking world. This is due not only to the fact that the field he tills is occupied there by Habermas and his students, who have hardly ever spoken of organisations (and, to the extent that they have, have done so predominantly in the sense of Luhmann's systems theory). But Giddens himself has abetted this trend in a certain sense. It is not merely that he has written well over 20 books to date, making it difficult even for benevolent readers to follow the developments of his thought. Rather, it is precisely the reader who is still unfamiliar with Giddens' work who is left with the impression that Bernstein (1989, 27) has described as follows:
"One sometimes feels that Giddens is not always in control of the material he is discussing. Where one expects detailed explication and justification, too often there is repetition and 'eloquent' variation. Temperamentally, Giddens is foxlike in his approach to issues, although his systematic ambitions require him to be like the hedgehog. Given the sheer variety of topics, themes, and thinkers he treats, one can understand why he tells us [about his book Constitution of Society:] 'This was not a particularly easy book to write and proved in some part refractory to the normal ordering of chapters' (Giddens 1984, xxxv)."
Inaccuracies and inconsistencies concerning terminology and conceptualisation have rightly been pointed out, and not only by critics such as Archer (1990) and Stinchcombe (1990). The German-language edition of Giddens' magnum opus, The Constitution of Society (Die Konstitution der Gesellschaft 1988), with its grave translation deficiencies, did not improve the situation. Yet even in the original English edition, the glossary of this work contains strange terminological deviations from the body of the text. There are categorical inconsistencies in Giddens' definitions of authoritative and allocative resources, for example-a meaningful distinction, which he, however, sometimes seems to confound with the distinctions "material/immaterial resources" and "technological/organisational resources" (Ortmann 1995a, 299, fn. 9). Bernstein (1989, 30, 33) has bemoaned deficits in how the theory deals with the justification of norms and in terms of its critical quality (see also Joas 1988, 23; for a response to this, Ortmann 1995a, 226-252). Gerstenberger (1988) has found the validity of historical claims lacking, a criticism whose relevance for the social-theoretical core of structuration theory would need to be discussed separately. It is thus not surprising that Giddens' writings have received a great deal of attention, but at the same time have triggered vehement controversies (for summaries, see the anthologies by Held and Thompson 1989; Clark, Modgil, and Modgil 1990; Bryant and Jary 1991).
Many critiques revolve around different ways of reading the basic theorem about the duality of structure. Archer (1990, 77), for example, charges that Giddens' concept of structuration oscillates between two divergent images: On one side there is the hyperactivity of agency, which contributes innately to the volatility of society; on the other side, the structural properties of society exhibit a rigid coherence, so that the aspect of stability is exaggerated. Outhwaite (1990, 85) has objected that Archer reads into Giddens' texts the very opposition between action and structure that Giddens is so intent on eliminating. Kießling (1988, 232-244, 179-232) refers to a ‛standard critique' that accuses Giddens of subjectivist reductionism, but himself comes to the opposite conclusion that Giddens' theoretical apparatus suffers from an objectivist surplus (for a concise anti-critique, see Gondeck 1998).
Discussion of the mediation of action and structure continues. It is rather surprising that the theoretical debates have lacked any specific discussion of the mechanisms of mediation and, in particular, that the significant question of modalities in Giddens' theorem of the duality of structure has garnered little attention. Although our remarks on this will not be the last word, either, we believe we have at least suggested the way forward for discussion of this concept in Section 2.2.
In our view, further clarification of the mediation of action and structure could be fruitful for the interplay of theory development and empirical research-if expectations are kept in check:
"The concepts of structuration theory, as with any competing theoretical perspective, should for many research purposes be regarded as sensitizing devices, nothing more. That is to say, they may be useful for thinking about research problems and the interpretation of research results" (Giddens 1984, 326-327).
A look at the landscape of current empirical organisational and network research shows how much it stands to gain from theoretical inspiration. Giddens' structuration theory provides just such an inspiring theoretical framework-provided we do not limit ourselves to trying to fill it with empirical material but instead keep in mind the constitutive role of this ‛filling' for that framework and the considerable opportunities to absorb and productively work with the insights of other theoretical traditions and the established concepts of organisation theory. This then also opens up opportunities for communication between supposedly incommensurable discourses-one need only think of interpretative organisational research, economic and sociological neo-institutionalism or even distinctly structuralist or action-theoretical approaches as well as the classical organisation theory of business administration. It then becomes only natural that in the course of its fulfilment such a framework should also be supplemented, perhaps even replaced-in a process of principally interminable critique.
Background: As the promotion of alcohol and tobacco to young people through direct advertising has become increasingly restricted, there has been greater interest in whether images of certain behaviours in films are associated with uptake of those behaviours in young people. Associations have been reported between exposure to smoking images in films and smoking initiation, and between exposure to film alcohol images and initiation of alcohol consumption, in younger adolescents in the USA and Germany. To date no studies have reported on film images of recreational drug use and young people's own drug use. Methods: Cross-sectional multivariable logistic regression analysis of data collected at age 19 (2002-4) from a cohort of young people (502 boys, 500 girls) previously surveyed at ages 11 (in 1994-5), 13 and 15 in schools in the West of Scotland. Outcome measures at age 19 were: exceeding the 'sensible drinking' guidelines ('heavy drinkers') and binge drinking (based on alcohol consumption reported in the last week), and ever use of cannabis and of 'hard' drugs. The principal predictor variables were an estimate of exposure to images of alcohol, and of drug use, in films, controlling for factors related to the uptake of substance use in young people. Results: A third of these young adults (33%) were classed as 'heavy drinkers' and half (47%) as 'binge drinkers' on the basis of their previous week's consumption. Over half (56%) reported ever use of cannabis and 13% ever use of one or more of the 'hard' drugs listed. There were linear trends in the percentage of heavy drinkers (p = .018) and binge drinkers (p = .012) by film alcohol exposure quartiles, and for ever use of cannabis by film drug exposure (p < .001), and for ever use of 'hard' drugs (p = .033).
The odds ratios for heavy drinking (1.56, 95% CI 1.06-2.29, comparing highest with lowest quartile of film alcohol exposure) and binge drinking (1.59, 95% CI 1.10-2.30) were attenuated by adjustment for gender, social class, family background (parental structure, parental care and parental control), attitudes to risk-taking and rule-breaking, and qualifications (OR for heavy drinking 1.42, 95% CI 0.95-2.13; for binge drinking 1.49, 95% CI 1.01-2.19), and further attenuated when adjusting for friends' drinking status (when the odds ratios were no longer significant). A similar pattern was seen for ever use of cannabis and 'hard' drugs (unadjusted OR 1.80, 95% CI 1.24-2.62 and 1.57, 95% CI 0.91-2.69 respectively; 'fully' adjusted OR 1.41 (0.90-2.22) and 1.28 (0.66-2.47) respectively). Conclusions: Despite some limitations, which are discussed, these cross-sectional results add to a body of work which suggests that it is important to design good longitudinal studies which can determine whether exposure to images of potentially health-damaging behaviours leads to uptake of these behaviours during adolescence and early adulthood, and to examine factors that might mediate this relationship.
---
Introduction
In high income countries there is concern about the consequences of excessive alcohol consumption [1], especially in youth, when these behaviours are common [1][2][3][4] and may track into adulthood [5,6]. There is evidence of a "dramatic rise" in alcohol consumption in young people in the west of Scotland and in the UK more broadly [7]. The reduction of alcohol (mis)use, and binge drinking in particular, are priorities for the British Government [3]. This reflects concerns about public drunkenness and anti-social behaviour on the one hand and, on the other, the longer-term health effects of excessive drinking, such as increased mortality in heavy drinkers [8].
There is also evidence from Scotland of an increase in the lifetime prevalence (ever use) of illicit drugs in recent decades, with ever-use of cannabis by young adulthood being much more common than ever-use of other drugs [9]. Cannabis use in young people is associated with psychotic symptoms and dependence on other illicit drugs [10,11], although there is debate over its health consequences [12]. Among young adults who have been long-term drug users, there is evidence of poor self-rated health and increased mortality [13,14].
This evidence on increasing substance use in adolescents and young adults, together with the lack of effective treatment for substance dependence [15], raises questions about which factors facilitate the uptake of excessive alcohol and drug use.
Media portrayals are one potential influence shaping young people's views of various behaviours. However, it has been demonstrated that portrayals of substance use in films are often unrealistic, as has been well documented for smoking [16][17][18]. They often glamourise smoking and make smoking appear to be more prevalent than contemporary figures support. Thus, despite the dramatic fall in adult smoking in the UK and USA since the 1950s, it has been suggested that smoking in films was as common in 2002 as in 1950 [19]. Smoking imagery declined in top US box office hits between 1996 and 2004, but not within films intended for youth audiences [20]. Similar findings have been reported for the most popular films in the UK, showing that despite a substantial fall between 1989 and 2008 overall, tobacco imagery appeared in 70% of all films, and predominantly in films categorised as suitable for children and young people [21]. This has alerted health professionals and policy-makers to the potential of media images to shape substance use in young people [22]. Evidence is now building to suggest a causal link between viewing images of smoking in films and young people's initiation of smoking [22][23][24][25][26]. To date, little attention has been paid to the influence of film images of other behaviours, such as alcohol and illicit drug use, on young people's own use of these substances.
Alcohol consumption is also very commonly portrayed in films, including in (US) G-rated (General Audience) [27] and animated [28] films. A content analysis of 100 of the top grossing US films between 1986 and 1994 reported that 96% had references that supported alcohol use, and 79% included at least one character who used alcohol. Whilst incidents of alcohol use were common, the hazards of drinking were rarely portrayed [29]. Similarly, a study of the most popular US film rentals from 1996-7 found 93% included alcohol use and 22% illicit drug use; in 12% of films one or more of the major characters used drugs and 65% of adult characters used alcohol; and in 43% of films alcohol use was portrayed as a positive experience [30]. A content analysis of the top grossing US films from 1999-2001 found 15% of teen characters used illicit drugs and again were unlikely to be shown as suffering any consequences (positive or negative, short or long-term) of their drug use [31].
In very recent years a few studies have reported an association between exposure to alcohol images in films and young people's own alcohol consumption [32][33][34][35]. These studies followed earlier ones which had demonstrated an effect of exposure to alcohol advertising, marketing and portrayals on young people's subsequent drinking behaviours [36]. Thus, in the USA, a strong relationship was seen between film alcohol exposure and onset of drinking in 3577 10-14 year olds who were never drinkers at baseline [32]. Cross-sectional associations between film alcohol exposure and drinking were observed in 5581 13-year olds from 27 schools in Germany. After adjustment (for socio-demographic, parenting and personal characteristics and friends' drinking), the odds ratios were 1.47 (95% confidence interval [CI] 1.19-1.82), 2.12 (95% CI 1.75-2.57) and 2.95 (2.35-3.70) for drinking without parental knowledge (comparing the higher three quartiles of exposure to the lowest) and 1.42 (0.93-2.28), 1.84 (1.27-2.67) and 2.59 (1.70-3.95) for binge drinking [33]. To our knowledge, no studies have reported on exposure to images of illicit drug use and own drug use.
Here we report a cross-sectional analysis which investigates the association between exposure to images of a) alcohol and b) drugs in films and a) current drinking and b) ever use of drugs in young adults (aged 19) living in the UK. We have previously reported a lack of association between exposure to smoking in films and smoking in these young adults [37]. As a number of factors may confound any relationship between film exposure and substance use [36], we adjust for gender, background characteristics, personal characteristics, friends' substance use and time spent watching television, videos or dvds.
---
Methods
---
Sample
Data are from the West of Scotland 11 to 16/16+ Study, a longitudinal study of health and lifestyles in a single year cohort [38]. Respondents were recruited in 1994-5 during their final year of primary schooling (age 11) and re-surveyed at ages 13 and 15 (in the 43 secondary schools to which they transferred), and at age 19 after leaving school. At 11, parental questionnaires were completed for 86% of the sample. The study received approval from the University of Glasgow Ethics Committee for Non-clinical Research Involving Human Subjects and (for school-based stages) participating Education Authorities and schools. Respondents were invited to take part via letters with information sheets detailing the survey procedures. Prior to participation they signed a consent form confirming that they had read the information sheet, had the study explained to them and understood what it involved, that the information they would provide was confidential and would be identifiable only by an ID number and that they could choose not to answer any questions they wished.
Because of the school-based nature of the sample, the sampling scheme involved several elements to ensure representativeness at both the primary and secondary school stages, as reported elsewhere [7]. In brief, the survey used a reverse-sampling procedure which randomly selected the 43 secondary schools stratified by level of deprivation and religious denomination, with a separate stratum for independent and state-run schools. These schools were used to select a random sample of 'feeder' primary schools (traditionally linked with the secondary schools), together with primaries making a high number of parental placing requests. Within these 135 primary schools, classes were randomly selected, with all pupils in selected classes eligible to participate. Of the 2793 pupils who attended the targeted secondary schools, 2586 (93%) participated in the baseline (age 11) survey, 85% in the survey at age 13, and 79% in the survey at age 15. As expected, losses to follow-up increased in the post-school period, reducing the sample size to 1256 (45%) at age 19. Full details of the sampling strategy are available elsewhere [39].
The baseline sample was representative of 11 year olds in the study area in respect of sex and socio-economic status (SES) [40]. Differential attrition made subsequent waves less representative; for example, attrition was higher among lower SES groups, school truants, early school leavers, and smokers. Probabilistic weights have been derived at each wave to compensate for nonresponse [40,41], adopting the system of weighting proposed by Little and David [42]. As these factors could be related to alcohol and drug use, we report results based on weighted data at age 19 (n = 1006, because only those who completed all waves were assigned a weight). Unweighted analyses are available on request.
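The Little and David weighting system used in the study is more elaborate, but the core idea of inverse-probability-of-response weighting can be sketched as follows (the function and variable names are ours, for illustration only):

```python
def nonresponse_weights(p_response):
    """Inverse-probability-of-response weights, normalized to mean 1.

    p_response: estimated probability that each baseline respondent was
    retained at the follow-up wave (e.g. from a logistic model of retention
    on baseline SES, truancy, early school leaving, smoking, etc.).
    """
    raw = [1.0 / p for p in p_response]   # under-represented groups receive larger weights
    mean_w = sum(raw) / len(raw)
    return [w / mean_w for w in raw]      # normalize so the weighted N equals the actual N
```

Respondents from groups with higher attrition (lower retention probability) are up-weighted so that the retained sample again resembles the representative baseline sample.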
Each school-based survey included self-completion questionnaires administered in exam-type conditions. At age 19, respondents were interviewed by nurses using computer aided personal interviews.
---
Measures
---
Exposure to alcohol and drug-taking in films
To estimate the amount of alcohol and illicit drug use that the respondents had seen in films ('film alcohol exposure' and 'film drug exposure') we aimed to replicate methods developed by Sargent and colleagues [23][24][25]43] as closely as possible. At age 19, respondents were asked to indicate in a self-completion questionnaire whether they had seen each of a unique list of 50 films randomly selected from a sample of 601 films released between 1988 and 1999; hence each respondent's list of 50 was different. The 601 films included the USA's 25 top box-office hits from 1988 to 1995 (n = 200); the top 100 box-office hits in 1996, 1997 and 1998 (n = 300); the top 50 box-office hits from the first half of 1999; and 51 additional films which featured stars popular amongst adolescents [25].
Trained coders have recorded the number of seconds of alcohol and drug use in each film as described elsewhere [32]. Alcohol use was defined as consumption of a beverage that was clearly alcoholic, implied possession of such a beverage (e.g. a character sitting in a bar with a filled beer glass), or purchasing alcohol. Excluded were occasions when a character had an empty alcoholic beverage container (e.g. empty beer bottle) or when alcoholic beverage containers were displayed but were not implied as being consumed (e.g. bottles shown above a bar). Drug use included actual or implied use (e.g. a character saying that they had used drugs just prior to a scene) or specific preparation for use (e.g. rolling a joint) as well as drug dealing. It also included use of drugs prescribed for another person, but not use or misuse of a person's own prescription drug.
An index of film alcohol use was calculated by summing the seconds of alcohol use in the films that each respondent had seen from his/her list of 50 films. This number was divided by the seconds of alcohol use they would have viewed if they had seen all 50 films on their list. This proportion was multiplied by the seconds of alcohol use in the full sample of 601 films, to provide an estimated exposure to alcohol in all 601 films given their viewing habits (see [32]). A separate index of film drug use was calculated in an analogous fashion (i.e. summing the seconds of drug use in each film and dividing it by the number of seconds of drug use they would have viewed if they had seen all 50 films on their list). The total estimated exposures for each respondent were translated into minutes. One respondent who did not complete a film list and two who reported having seen all 50 films on their list were excluded (resulting weighted N = 1002). The estimated film exposure variables were then classified into quartiles. Cut-offs for the film drinking exposure were: 0.4-476 minutes for the lowest, 477-691 for the 2nd, 692-931 for the 3rd, and 931-2017 for the highest quartile; those for the film drugs exposure were 0-7 minutes, 8-30 minutes, 30-70 minutes and 70-175 minutes.
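The exposure index described above reduces to a simple proportional scaling. The following is a sketch of that calculation under the paper's description; the function and variable names are ours, not taken from the original coding scripts:

```python
import bisect

def estimated_exposure_minutes(seconds_seen, seconds_full_list, seconds_all_601):
    """Estimate a respondent's exposure (in minutes) to on-screen use.

    seconds_seen      -- seconds of use in the films the respondent saw
                         from his/her unique 50-film list
    seconds_full_list -- total seconds of use across all 50 listed films
    seconds_all_601   -- total seconds of use across the full 601-film pool
    """
    proportion = sum(seconds_seen) / seconds_full_list   # share of listed exposure actually viewed
    return proportion * seconds_all_601 / 60.0           # scale up to the 601-film pool, in minutes

def exposure_quartile(minutes, upper_cutoffs):
    """Assign quartile 1-4 given the upper bounds of quartiles 1-3 (in minutes)."""
    return bisect.bisect_right(upper_cutoffs, minutes) + 1
```

For example, with the alcohol cut-offs above, a respondent with an estimated 500 minutes of exposure would fall in the second quartile.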
---
Alcohol
At age 19, current drinkers reported the quantity of a range of alcoholic drinks consumed each day over the past week. This was summed over the last week; never and ex-drinkers were assumed to have consumed 0 units. Dichotomous measures were derived. We followed the UK Royal College of Psychiatrists' guidelines to define 'binge drinking' (females were defined as a binge drinker if they had consumed over 6 units in any single day in the last week, and males if they had consumed over 9 units [1,44]). 'Heavy weekly drinkers' were those exceeding current guidelines (over 14 units per week for females, over 21 units for males) [1,44].
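The sex-specific thresholds translate directly into a classification rule. A minimal sketch (our own function and argument names, assuming units are recorded per day for the past week, with never/ex-drinkers coded as zeros):

```python
def classify_drinking(sex, daily_units):
    """Classify a respondent's past-week drinking.

    sex         -- "F" or "M"
    daily_units -- units consumed on each of the last 7 days
    Returns (binge_drinker, heavy_weekly_drinker).
    """
    binge_cutoff = 6 if sex == "F" else 9      # units in any single day
    weekly_cutoff = 14 if sex == "F" else 21   # units over the whole week
    binge = any(units > binge_cutoff for units in daily_units)
    heavy = sum(daily_units) > weekly_cutoff
    return binge, heavy
```

Note that under these definitions a respondent can be a binge drinker without being a heavy weekly drinker (one heavy day in an otherwise dry week), and vice versa.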
---
Drug use
Respondents indicated which drugs they had ever used from a list which included common street names (e.g. cannabis [hash, grass, dope]; temazepam [jellies, ruggers, eggs, Gellphix]). Because of the differing social characteristics of people who have only ever used cannabis vs other drugs [9], we report here two separate outcomes: ever use of cannabis and ever use of 'hard' drugs, defined, following recommendation by the Prevention Working Group of the UK Advisory Council on the Misuse of Drugs [9] as temazepam, tranquillisers, heroin, methadone, temgesic, cocaine, crack and morphine or opium.
---
Parental social class
Occupational data from parents at age 11 were used to derive a head of household classification (using father's current occupation or previous if not currently working, or if no father, the mother's current or previous occupation) (hereafter referred to as 'social class'). Where no parental data were available, information from the young person (at age 11) on current parental occupation was utilised; the reliability of these data is high [45]. Social class data (classified using the UK Registrar General's Classification of Occupations [46]) were collapsed into four categories: non-manual (white collar and professional) occupations (class I, II and IIINM); skilled manual (blue collar) (class IIIM); semi-skilled and unskilled manual (class IV and V); and missing.
---
Parental structure
At age 15 respondents reported which parental figure(s) they lived with (classified here as: both birth parents; one birth parent and new partner; one birth parent alone (the majority) or other relatives (e.g. a grandparent)). The very few cases with no parent (e.g. with foster parents) were excluded because information on parental care/control and household variables could not be consistently evaluated by the respondents.
---
Parental Bonding Inventory
At 15, respondents completed the Brief Parental Bonding Instrument (PBI) [47] which provides scores for parental care and (over)control ranging from 0-8 (higher scores representing greater perceived care and control). Each scale was collapsed into three categories for crosstabulations but used as a continuous variable in the logistic regressions.
---
Attitudes to risk and rule-breaking
At 15, respondents rated themselves in relation to risk-taking ('I take risks') and rule-breaking ('I am a rule breaker'), with response categories 'very true', 'true', 'untrue' and 'very untrue'.
---
Qualifications by age 19
Respondents were dichotomised into those who had obtained any 'Highers' at school (Scottish qualifications, generally taken at age 16-17, required for entry into higher education) vs none.
---
Friends' alcohol and drug use
At 19, respondents reported how many of their friends engaged in various activities, with seven categories ranging from 'none' to 'all'. Two dichotomous measures were derived for the crosstabulations: whether half or more of their friends drank, and used cannabis.
---
TV, video and dvd use
At 19, respondents reported how many hours each week and weekend day they usually spent watching television, videos or dvds. The total hours per week were categorised as 0-9, 10-19, 20-29, 30-39 and 40+ hours for crosstabulations.
---
Analysis
Crosstabulations were used to compare the proportions for the four outcomes at age 19 according to quartiles of alcohol and drug exposure (as appropriate), and potential confounders. A series of logistic regression models was then run for each outcome. Multivariate models were built sequentially. First, the unadjusted relationship with the relevant film exposure was assessed. Subsequent models adjusted for gender, then additionally for: background variables (social class, parental structure, care and control); personal characteristics (risk-taking, rule-breaking, qualifications); friends' drinking or drug use; and finally for hours per week watching television, videos or dvds. We present weighted data but analyses using unweighted data produced similar results (available on request). Crosstabulations and logistic regression analyses excluded respondents who had missing data on any of the potential confounders in the final multivariate model (resulting N in respect of heavy drinking = 922; binge drinking = 928, ever use of cannabis and ever use of 'hard' drugs = 926).
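The sequential model-building scheme amounts to an ordered list of covariate blocks, each model adding one block to everything before it. A sketch with hypothetical variable names (with the statsmodels package, each resulting formula could then be fitted via smf.logit(formula, data=df).fit(), though that library call is an assumption about tooling, not part of the original analysis):

```python
# Covariate blocks in the order they enter the models (names illustrative)
BLOCKS = [
    ["film_exposure_quartile"],                                  # Model 1: unadjusted
    ["gender"],                                                  # Model 2: + gender
    ["social_class", "parental_structure", "care", "control"],   # Model 3: + background
    ["risk_taking", "rule_breaking", "qualifications"],          # Model 4: + personal
    ["friends_use"],                                             # Model 5: + friends' use
    ["tv_video_dvd_hours"],                                      # Model 6: + screen time
]

def model_formulas(outcome, blocks=BLOCKS):
    """Return the cumulative logistic-regression formulas, one per model."""
    covariates, formulas = [], []
    for block in blocks:
        covariates.extend(block)
        formulas.append(f"{outcome} ~ " + " + ".join(covariates))
    return formulas
```

Comparing the exposure coefficient across successive models shows how much of the unadjusted association is accounted for by each block of covariates.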
---
Results
Basic descriptive characteristics of the sample are shown in Table 1. Substance use was common. A third (33%) of the young adults were classed as 'heavy drinkers' and half (47%) as 'binge drinkers'. Over half (56%) reported ever use of cannabis, but many fewer (13%) reported ever use of one or more of the 'hard' drugs listed. Almost all (93%) reported that half or more of their friends drank alcohol (equivalent figure for ever use of cannabis, 21%).
Respondents had seen a mean of 19.0 (SD = 7.3, range 1-44) of the 50 films presented to them; mean film alcohol and drug exposures were 726 minutes (12.1 hours) and 45 minutes respectively. The mean number of films was higher for males (20.8) than females (17.3, F = 60.5, p = .000) and males' film alcohol and drug exposures were higher (770 vs. 682 minutes, F = 16.5, p = .000 and 51 vs. 38 minutes, F = 23.1, p = .000 respectively). There were no social class differences for films seen or film alcohol exposure, but film drug exposure was higher in those from higher social class backgrounds (non-manual = 51, skilled manual = 40, semi/unskilled manual = 41 minutes, F = 6.6, p = .001). There was a positive correlation between the film alcohol and drug exposure measures (r = .510).
Table 2 reports the percentage of heavy and binge drinkers by quartile of film alcohol exposure, and the percentage of ever users of cannabis and ever users of 'hard' drugs by film drug exposure. The p values reported in the table relate to heterogeneity within the groups, but we also tested for linear trends. In the cross-tabulations, the tests for linear trends in the percentage of heavy drinkers (p = .018) and binge drinkers (p = 0.012) by film alcohol exposure quartiles (see Table 2) were statistically significant. Similarly, there was an increase in the percent who had ever used cannabis with each quartile of film drugs exposure (linear trend p = .000). The percentage who had used 'hard' drugs was also highest in the highest quartile of film drug exposure (16%), but with less evidence of a stepwise increase (linear trend p = .033).
Male gender and perceiving oneself as a risk-taker and rule-breaker were associated with all four substance use measures (p < 0.001 in all cases), and having no 'Highers' at 19 with all (p = 0.004 for heavy drinking, and p < 0.001 for ever use of cannabis, and ever use of hard drugs) except binge drinking (p = 0.65). Respondents from manual class backgrounds were more likely to have used 'hard' drugs (p = 0.037), and those reporting lower parental care were more likely to have ever used both cannabis (p = 0.003) and 'hard' drugs (p = 0.012).
Friends' drinking and cannabis use were strongly associated with own drinking and drug status respectively. There were no associations between the substance use measures and parental structure, parental control, or hours per week watching television, videos or dvds.

Table 3 shows the results of the logistic regression models for each outcome, both before and after adjusting for potential confounding or mediating variables. We consider the alcohol outcomes first. In the unadjusted model, those in the highest quartile of film alcohol exposure were more likely to be classed as both heavy and binge drinkers (OR = 1.56 (95% CI 1.06-2.29) and 1.59 (95% CI 1.10-2.30) respectively, compared with the lowest quartile) on the basis of their reported alcohol consumption the previous week. Adjustment for gender reduced the associations, but further adjustment for background characteristics returned the odds ratios for the drinking measures to the unadjusted levels. Adjusting for risk-taking, rule-breaking and qualifications, and particularly for friends' drinking status, reduced the odds ratios, but further adjustment for hours watching television, videos or dvds made no difference to the associations. In this final model only gender (OR for females 0.59 (95% CI 0.43-0.80) for heavy drinking and 0.64 (95% CI 0.48-0.85) for binge drinking) and friends' drinking status (OR for half or more friends drinking 1.44 (95% CI 1.27-1.64) for heavy drinking and 1.54 (95% CI 1.36-1.73) for binge drinking) had 95% confidence intervals which did not include unity (1.00), i.e. film alcohol exposure was no longer significantly associated with heavy or binge drinking. (Full tables showing OR and 95% CI for all variables included in all models available on request.)
We turn now to consider the relationship between exposure to film images of illicit drug use and ever-use of cannabis and 'hard' drugs. In the unadjusted model, ever use of cannabis showed a stepped association with film drug exposure (OR for third and highest, compared with the lowest quartile of film drug exposure = 1.46 (95% CI 1.01-2.10) and 1.80 (95% CI 1.24-2.62)). Adjustment for gender attenuated the association, whereas adjusting additionally for family background made little difference. The OR was further attenuated (with 95% confidence which included unity) after adjusting for personal characteristics, friends' reported cannabis use, and then tv/dvd/video watching (see table 3). In the final model having any 'Highers' (OR 0.56, 95% CI 0.39-0.80), seeing oneself as a rule-breaker (OR 10.70, 95% CI 3.43-3.38, comparing those saying 'very true' as compared with those saying 'very untrue'), and reporting that half or more of one's friends used cannabis (OR 2.14, 95% CI 1.87-2.45) were the only ORs in the model with 95% confidence intervals that did not include unity.
For 'hard' drug use the confidence intervals for the unadjusted ORs in third (OR 1.30, 95% CI 0.75-2.26) and highest (OR 1.57, 95% CI 0.91-2.69) quartiles overlapped with unity. Although the odds were greatest in the highest film drug exposure quartile in each of the models for ever use of 'hard' drugs, none of the associations reached conventional levels of significance, even in the unadjusted model. In the final model hard drug use was significantly inversely associated with having any 'Highers' (OR 0.26, 95% CI 0.15-0.45), and positively associated with seeing oneself as a rule-breaker (OR 3.02, 95% CI 1.05-8.69, comparing those saying 'very true' as compared with those saying 'very untrue') and reporting that half or more of one's friends used cannabis (OR 2.10, 95% CI 1.76-2.51).
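The odds ratios and 95% confidence intervals quoted throughout the results follow from the fitted logistic coefficients in the standard way (OR = exp(b), CI = exp(b ± 1.96·SE)); a small helper for illustration:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient and its SE."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))
```

A confidence interval that includes unity (1.00) corresponds to p > 0.05 for that coefficient, which is the significance criterion applied in the text.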
---
Discussion
In this cross-sectional analysis, we have demonstrated an association between film exposure to alcohol and both binge and heavy drinking in young adults, and, to our knowledge for the first time, an association between film exposure to illicit drugs and ever use of cannabis. These associations persisted after adjusting for gender, social class, family structure and levels of parental control, but not after adjusting for other variables, including personal characteristics such as risk-taking, rule-breaking and achievement of school qualifications, and in particular friends' substance use. It is somewhat difficult to know how to interpret these attenuations in the associations, particularly in this cross-sectional analysis. It is likely, for example, that young people who drink heavily or take drugs are not only more inclined to do this in the company of like-minded friends, but they may also share, or develop similar tastes in cultural representations of substance use with them, which may in turn determine the kinds of films they choose to watch. On the other hand, portrayals of substance use could directly influence an individual's uptake of drinking and drug use which could itself influence the friendship groups that they choose to maintain or develop.
The cross-sectional nature of the analysis thus means that it is not possible to establish the direction of causality. Even before concerning ourselves with the impact of potential mediating or confounding factors, we cannot distinguish here between two plausible but competing explanations, either that film images of substance use may influence behaviours or that people who have already adopted particular patterns of substance use may choose to watch films that reflect similar lifestyles and values. Furthermore, it is important to acknowledge that images of substance use in films occur within a wider media context in which a vast array of different images are portrayed over time from a variety of sources (including magazines, TV, newsprint, websites and social messaging sites). A few other studies (e.g. [32,33]), one including prospective data [32], have reported an association between film alcohol exposure and drinking in younger adolescents, using similar methods. A German study (mean age 13) obtained much stronger associations between film alcohol exposure and measures of drinking, before and after adjustment for a comparable set of potential confounders [33].
Our findings of some association between exposure to film images of alcohol and illicit drugs and young people's own substance use in this cross-sectional analysis are of interest, particularly because, in contrast to other studies reported to date (e.g. [23][24][25][26]), we did not see any association in this study population between exposure to smoking in films and young people's own smoking at age 19 [37]. We speculated that this lack of association with smoking may be attributable to several factors; these factors could also explain the smaller association we observe in this UK study between film alcohol exposure and drinking in comparison with studies from the USA and Germany.
First, there are methodological issues, one of which relates to respondent age. Our study differs from previous research studies which have focussed on (early) adolescent experimentation with smoking and drinking. It is plausible that, by age 19, other influences (e.g. direct observation of substance use amongst peers) could have had such a strong effect that the impact of exposure to these behaviours in films is 'swamped'. Young adults may also have a more sophisticated and critical reading of media images which makes them more resistant to their effects. A second methodological issue relates to the timing of the film exposure. We used coding of substance use in films completed by our American colleagues at the time of our fieldwork. At that point coding was only available on films up to and including 1999 (when our sample were aged 15). Hence we missed exposures to more recently released films.
Our second group of potential explanations for a lack of an association between smoking in films and own smoking in these young people [37] related to the cultural environment and the prevalence and social prominence of the behaviours in question. Although the mass film industry is increasingly globalised, it is plausible that Scottish viewers empathise less with Hollywood film stars, or are distanced from American culture. Fictional or real-life visual portrayals of substance use in TV programmes (such as soap operas), popular with young people in the UK, may be more salient in the Scottish context.
Another potential difference lies in the prevalence of substance use in the various countries which have been studied. Scotland is commonly described as having an 'alcohol culture', in contrast with most other European countries, where levels of consumption have remained static or fallen [48]. Against such a background, any impact of the portrayal of substance use in films may be diminished.
Additional caveats that we raised in our previous paper on smoking [37] are also relevant here: we have no measure of how accurate young adults' recall of the films they had seen was; and we did not record whether films had been viewed once or repeatedly. Also alcohol and drug use were self-reported in the study (as in most similar studies), although our interviewers went to some lengths to ensure confidentiality and privacy whilst reporting on substance use. Furthermore, alcohol use measures were based on reports of consumption in the last week and this may not have been representative of the usual pattern and frequency of drinking in every individual.
Other limitations that we have raised earlier in this paper are important to rehearse. There was considerable and differential attrition between the first wave of the study (when 11 year old pupils were representative of all 11 year olds in the areas in which they lived) and the wave of data collection at age 19 years. Although we selected a weighting system designed to address differential attrition, it is possible that some residual attrition bias remains. For the alcohol variables we were able to use current measures of consumption as our outcome, whilst for the drug use variables we were only able to analyse ever-use. In the latter case we cannot know when this drug use took place or for how long it was a feature of the young person's life.
Our measures of film exposure are comparable to those reported previously. For example, a study of American 10-14 year olds, based on the same parent film sample reported that respondents had seen a median of 16 of the 50 films on their unique list (compared with 19 in our study), which translated into a median exposure to alcohol use of 8.3 hours in the sample of 601 films (compared with a mean exposure of 12.1 hours in our study). The relatively higher alcohol exposure would be expected, given the nature of films likely to have been watched by the older adolescents in our study.
---
Conclusion
Our finding of an association between estimated exposure to film images of alcohol use and young people's current use of alcohol from this cross-sectional study is consistent with findings from other recent studies. The association we report for exposure to film images of illicit drugs and ever use of cannabis suggests that this may be an important relationship to explore in future well-designed longitudinal studies which are able to examine whether exposure to images of drugs in films is related to the initiation of illicit drugs use. Such studies could also explore whether the types of images (e.g. 'glamourised' or 'normalised' images of alcohol and illicit drug use as compared with negative or neutral images) affect different groups of young people in different ways.
---
Competing interests
The authors declare that they have no competing interests.
---
Authors' contributions
KH, JS and HS specified the analyses to be undertaken and led on interpretation of the findings. All drafts of the paper were written by KH, with input from all co-authors. HS, PW and RW oversaw the design and data collection for the 11 to 16/16+ Study. HL and HS undertook the statistical analyses. JS oversaw the coding of images of drinking alcohol and drug use in the films. All authors read and approved the final manuscript.
During later life, inadequate social interactions may be associated with worse quality of life in older adults. Rural older adults are prone to developing unhealthy lifestyles related to social activities, which can lead to a poorer quality of life than that enjoyed by older adults living in urban areas. This study aimed to describe longitudinal changes in social activity participation and health-related quality of life among rural older adults, exploring potential associations with changes to in-person social activity over four years. We used prospective community-based cohort data from the Korean Social Life, Health, and Aging Project (KSHAP) collected between December 2011 and January 2016. The sample included 525 older adults who completed the measure of health-related quality of life. Our results showed a significant change in health-related quality of life according to changes in participation in meeting with friends. Even though an individual's participation in other social activities did not show significant differences in health-related quality of life, our findings imply that in-person social activities may be an important resource to encourage participation in physical activities and to develop other positive outcomes, such as a sense of belonging or satisfaction with later life, among rural older adults.

---
Introduction
Population aging is a rapidly growing global challenge. According to the report from the United Nations in 2019, the number of individuals worldwide aged 65 or older will increase by almost 80% over the next three decades [1]. In Europe and North America, major industrialized parts of the world with low crude mortality and low fertility rates, more than 25% of the population is projected to be aged 65 or older by 2050 [1]. Among developed nations, South Korea is experiencing particularly rapid population aging. By the year 2030, the percentage of older individuals (65 years or older) is projected to be near 25% in South Korea, which is higher than the proportions estimated in countries like England (21.9%), the United States (19.7%) and China (16.2%) [2]. Similar to the cases of other Western [3] and Asian countries [4], population aging is a more prominent concern in rural areas of South Korea because many younger people migrate to urban areas for education or employment, but rarely return to their home town, leaving the remaining population disproportionately older [5].
In the context of this global trend toward extended lifespans, promoting healthy aging is becoming more important than ever; that is, maintaining well-being in older age by developing and maintaining functional ability [6]. In older people, declining physical function associated with biological aging may be a natural and irreversible process [7]. Given this biological challenge, it is particularly important for older people to optimize their health-related quality of life [8,9]. Health-related quality of life is a broad, value-laden concept that reflects how individuals perceive the impact of physical and/or mental well-being on their ability to fulfill daily functioning and interactions with others [10]. A number of factors, including socioeconomic status [11,12], health behaviors [13], and living arrangements [14] are related to the promotion of healthy aging; among these, participation in social activities has been reported as an important determinant of health-related quality of life among older adults [4,15].
As they age, many older people experience changes in their participation in social activities due to major life transitions such as retirement, the death of close family or friends, or declining physical functions. Studies have reported that the reduction in social networks common among older people can increase social isolation and loneliness, which have been identified as public health concerns due to their negative impact on physical and mental health [16][17][18]. According to previous studies [19][20][21], the contribution of participating in social activities improved the physical and mental health of older people.
To understand social activities and health status in older people, it is important to consider variations in the living environment, such as access to transportation, local safety, neighborhood stability, and social and local climate, due to their influence on active and healthy aging [22]. In general, living in a rural area often limits a person's access to resources that are critical to health, such as education, jobs, or clinics and hospital facilities [23,24]. Moreover, rural areas in many countries experience more pronounced population ageing and are likely to have higher rates of poverty and greater rates of chronic diseases than urban areas [3,24]. Thus, rural health initiatives that promote safety and healthy living are vital for rural populations characterized by an ageing farming workforce [4,22]. In South Korea, despite the contribution of economic advancement to reducing rural-urban disparities in public services and welfare programs, health inequality among rural older adults remains a major public health issue [25]. However, findings from previous studies that have examined rural-urban differences in social activities have been inconsistent [26][27][28][29]. This may result from a number of factors, such as rapidly changing rural-urban boundaries, especially in developing countries, as well as increasing diversity in social determinants of health within the older adult population. While health-related quality of life has been the main outcome of interest in research targeting the older population, few studies have longitudinally explored quality of life during the senior years, examining its association with changes to in-person social activity among community-dwelling older adult populations in rural areas [30][31][32][33].
Therefore, the main purposes of this study were to (1) describe longitudinal changes in participation in social activities and health-related quality of life and (2) explore the associations between changes in in-person social activities and health-related quality of life among rural older adults over four years.
---
Materials and Methods
---
Study Design and Participants
This study is part of the Korean Social Life, Health, and Aging Project (KSHAP), a community-based, longitudinal study. The KSHAP aimed to understand the current health status, trends and determinants of health, and social network characteristics among older Koreans dwelling in a rural community: Township K, Gangwha-gun, Incheon, South Korea [34]. More than 42% of residents in Gangwha-gun engage in agriculture and about 40% of the area is farmland [34]. Township K is a typical rural Korean community in which most residents live by farming [35]. In 2013, the total population of Township K was 1864 individuals from 871 families. The KSHAP study targeted the entire older adult population aged 60 years or older, as well as their spouses; the age criterion was based upon the standard of an older age pensioner set by the National Pensions Act [35]. Detailed information on the KSHAP has been provided elsewhere [34]. The survey questionnaires included questions on general sociodemographic characteristics, health history, social network characteristics, health-related quality of life, and other physical and psychosocial functions [34].
---
Measurements
---
Sociodemographic and Health Characteristics
Variables related to sociodemographic characteristics included age, gender, work status, marital status, religion, and education. Gender was dichotomized by either male or female. Age was calculated by subtracting the participant's birth year from the year of survey. Work status was categorized as "yes" or "no." Marital status was categorized as "living with spouse," "separated," "widowed," "divorced," or "never married." Religion was categorized as "no religion," "protestant," "Catholic," "Buddhist," or "other." Education was categorized as "no education," "elementary school," "middle school," "high school," or "college or higher".
Variables related to health characteristics included Body Mass Index (BMI), smoking status, drinking habit, medical comorbid conditions, gait speed, and mental status. BMI (kg/m²) was calculated using height and weight and then categorized according to the geriatric BMI groups, with "underweight" being a BMI < 22.5 kg/m², "normal" being a BMI of 22.5 kg/m² to 24.9 kg/m², and "overweight/obese" being a BMI ≥ 25 kg/m² [36]. Smoking status was categorized as "past and current smoker," "past smoker but not now," "have never smoked," or "recently started smoking." Drinking habit was categorized as "never," "rarely," or "once a week or more." Regarding medical comorbid conditions, self-reported diagnoses of hypertension, hyperlipidemia, arthritis, or osteoporosis were included. Cognitive status data were obtained using the Mini-Mental Status Examination for Dementia screening (MMSE-DS), Korean version [37]. To measure gait speed, the Timed Up and Go (TUG) test was administered by a trained data collector [38].
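The geriatric BMI grouping described above can be sketched as a small function; the function name is illustrative, but the cut-offs follow the categories reported above [36]:

```python
def geriatric_bmi_category(weight_kg: float, height_m: float) -> str:
    """Classify BMI using the geriatric cut-offs described above [36]."""
    bmi = weight_kg / height_m ** 2
    if bmi < 22.5:
        return "underweight"
    elif bmi < 25.0:            # 22.5-24.9 kg/m²
        return "normal"
    else:                       # >= 25 kg/m²
        return "overweight/obese"
```

For example, a participant weighing 70 kg at 1.70 m (BMI ≈ 24.2 kg/m²) would be classified as "normal" under this scheme.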
---
Health-Related Quality of Life
Health-related quality of life was measured using the Short Form Health Survey, 12-item version (SF-12) [39]. Participants were asked to rate each item on a five-point Likert scale: 1 ("poor"), 2 ("somewhat poor"), 3 ("good"), 4 ("very good"), or 5 ("excellent") [35]. The SF-12 consists of the physical component summary (PCS) and the mental component summary (MCS). The PCS and the MCS are standardized (mean = 50, standard deviation (SD) = 10). For each of the eight domains, the items are summed and then converted to a 0-100 scale. Higher scores indicate better physical and mental health-related quality of life.
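The 0-100 conversion of the summed domain scores described above is a linear min-max transformation; a minimal sketch follows (the PCS/MCS standardization itself uses the published SF-12 scoring weights, which are not reproduced here):

```python
def rescale_0_100(raw_score: float, raw_min: float, raw_max: float) -> float:
    """Linearly convert a summed domain score to a 0-100 scale,
    where raw_min/raw_max are the lowest/highest possible sums."""
    return 100.0 * (raw_score - raw_min) / (raw_max - raw_min)
```

Under this transformation, the lowest possible summed score maps to 0, the highest to 100, and intermediate sums scale proportionally.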
---
Social Activities
---
In-Person Social Activities
In the KSHAP study [34], in-person social activities were defined as official or unofficial participation in social activities outside of activities related to earning income. In this analysis, we included four activity types: volunteering, religious activities, hobbies, and meeting with friends. In the Wave 1 survey, participants were asked to respond either "yes" or "no" to each item. In the Wave 4 survey, participants were asked to rate the frequency in which they participated in each activity over the prior year by choosing one of the following options: 1 ("several times per week"), 2 ("once a week"), 3 ("once a month"), 4 ("several times per year"), 5 ("once or twice a year"), 6 ("fewer than once a year"), or 7 ("not at all"). For the present analysis, we recoded response options used for Wave 4 into two categories: "yes" (more than once or twice a year, collapsing the options 1 through 5), and "no" (fewer than once a year or not at all, collapsing the options 6 and 7).
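The recoding of Wave 4 responses into the binary Wave 1 coding can be sketched as follows (the function name is illustrative; the collapsing rule is exactly the one described above):

```python
def recode_wave4_response(option: int) -> str:
    """Collapse the 7-point Wave 4 frequency options into the binary
    Wave 1 coding: options 1-5 -> "yes", options 6-7 -> "no"."""
    if not 1 <= option <= 7:
        raise ValueError("option must be between 1 and 7")
    return "yes" if option <= 5 else "no"
```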
---
Social Networks
Social networks were measured by network size and network density. Network size (i.e., discussion network members) refers to the number of individuals with whom participants can discuss the important topics in their lives. In the parent study, each participant was asked to list the names of a maximum of five discussion network members with whom they had interacted during the last 12 months. When the individual's spouse was included, the maximum number of discussion network members could be six.
Network density refers to the number of actual relationships that existed among the members of an individual participant's social network out of the total possible number of relationships [40]. For evaluation, participants were asked to indicate the frequency of their interactions with each discussion network member based on an eight-point scale that ranged from "every day" to "less than once per year." In the parent study, if the participant reported that they "have spoken to each other at least once per week," a relationship is assumed to exist between the two network members. Network density can range from 0 to 1. A higher score indicates better social connectedness among members within the network.
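The density calculation described above (observed ties divided by all possible pairs among a participant's network members) can be sketched as follows; the function and variable names are illustrative:

```python
from itertools import combinations

def network_density(members, weekly_ties):
    """Density = observed ties / possible ties among a participant's
    discussion network members. weekly_ties is a set of frozensets,
    each one a pair assumed to speak at least once per week."""
    possible = list(combinations(members, 2))
    if not possible:  # density is taken as 0 for networks of size < 2
        return 0.0
    observed = sum(1 for pair in possible if frozenset(pair) in weekly_ties)
    return observed / len(possible)
```

For a three-member network with two observed ties, the density would be 2/3, consistent with the 0-1 range described above.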
---
Ethical Considerations and Data Collection
This study was approved by the Institutional Review Board of Yonsei University (IRB Approval No.: YUIRB-2011-012-01) and conducted following the Declaration of Helsinki guidelines. All participants had the opportunity to ask questions after a full review of the study protocol with a data collector and signed a written informed consent before their participation. Trained data collectors conducted surveys via face-to-face interviews in the participants' homes or at the local community center. Completing the survey took an average of 48 min.
---
Data Analysis
The data were analyzed using IBM SPSS Statistics for Windows, version 25 (IBM Corp., Armonk, NY, USA). Descriptive statistics were reported for all variables. Comparisons were made between Wave 1 and Wave 4 using paired sample t-tests for continuous variables and chi-square tests for categorical variables; the same comparisons were also made within each gender category. Individuals' participation in each social activity from Wave 1 to Wave 4 was summarized by four change categories: (1) answered "yes" in both Wave 1 and Wave 4 ("yes + yes"), (2) answered "yes" in Wave 1 but "no" in Wave 4 ("yes + no"), (3) answered "no" in Wave 1 but "yes" in Wave 4 ("no + yes"), and (4) answered "no" in both Wave 1 and Wave 4 ("no + no"). One-way ANOVA with Bonferroni post-hoc tests was used to compare the changes in quality of life (mental and physical) from Wave 1 to Wave 4 across the change categories in social activities. We conducted further analyses of gender differences for the social activities that showed statistically significant differences in health-related quality of life. Statistical significance was set a priori at p < 0.05.
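The two main comparisons described above (paired t-test across waves, one-way ANOVA across change categories) can be sketched in Python using SciPy rather than SPSS; all numeric values below are illustrative, not the study's data:

```python
import numpy as np
from scipy import stats

# Paired t-test: the same participants measured at Wave 1 and Wave 4
# (illustrative PCS values, not the study's data)
pcs_w1 = np.array([48.2, 51.0, 45.3, 50.1, 47.8, 52.4])
pcs_w4 = np.array([44.9, 49.5, 42.1, 48.0, 45.2, 50.0])
t_stat, p_paired = stats.ttest_rel(pcs_w1, pcs_w4)

# One-way ANOVA: change in score (Wave 4 minus Wave 1) across the four
# change categories for one social activity (illustrative values)
yes_yes = [1.2, -0.5, 0.8, 0.3]
yes_no  = [-6.5, -5.9, -6.0, -6.3]
no_yes  = [0.4, 1.1, -0.2, 0.6]
no_no   = [0.9, 0.1, 1.3, 0.5]
f_stat, p_anova = stats.f_oneway(yes_yes, yes_no, no_yes, no_no)
```

In SPSS, the Bonferroni post-hoc comparisons follow the omnibus ANOVA; in SciPy/statsmodels they would require a separate pairwise-comparison step.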
---
Results
---
Sample Characteristics
As shown in Table 1, the average age of the participants was 71.2 (Wave 1) and 75.1 (Wave 4). The majority of the participants were women (58%) and did not finish high school (87%). The mean BMI was 24.18 kg/m² and about 64% of the participants were either overweight (23-24.9 kg/m²) or obese (>25 kg/m²). From Wave 1 to Wave 4, the proportion of participants who reported living with a spouse significantly decreased (76% vs. 70.3%, p < 0.001). The proportion of participants who reported working (i.e., employed) showed a significant increase from Wave 1 to Wave 4 (73.3% vs. 75.8%, p < 0.001). Participants reported smoking and drinking less in Wave 4 compared with Wave 1 (p < 0.001). Their gait speed slowed, with the time to complete the Timed Up and Go test increasing from 12.8 to 13.2 s (p < 0.001); however, the test result was above the normative reference for their age group (7.1-12.7 s) and below the cut-off point for high risk of falls (i.e., 14 s). Over half of participants reported hypertension (52.8% at Wave 1, 58.5% at Wave 4). The proportion of participants with major chronic conditions (i.e., hypertension, hyperlipidemia) increased from Wave 1 to Wave 4, with the exception of osteoporosis. Both men and women showed similar trends of change in these characteristics from Wave 1 to Wave 4. As shown in Table 2, the proportion of in-person social activity participation was higher in Wave 4 than in Wave 1 for all activities: volunteering (p = 0.004), religious activities (p = 0.001), meeting friends (p = 0.004), and hobbies (p = 0.004). These trends were consistent in both men and women, with the exception of volunteering, in which more women reported participation in Wave 4 (p = 0.044) whereas men reported no change (p = 0.096). Additionally, the number of participants in each change category by social activity type is summarized in Figure 1.
In all four social activities, the majority of participants showed no change in participation from Wave 1 to Wave 4 (i.e., "yes + yes" and "no + no"). This trend was consistent in both men and women (Figure 1). Regarding social networks, there was a significant increase in discussion network size from Wave 1 (2.41) to Wave 4 (2.78; p < 0.001). This increasing trend was consistent in both men and women.
However, there was no significant change in network density from Wave 1 (0.98) to Wave 4 (0.96; p = 0.178). Men and women showed different trends in the change of network density. In men, the mean network density significantly decreased from Wave 1 (0.99) to Wave 4 (0.96; p = 0.006) while no significant change was seen in women (p = 0.737).
Regarding health-related quality of life, participants' PCS scores significantly decreased from Wave 1 to Wave 4 (p < 0.001). In contrast, MCS scores significantly increased from Wave 1 to Wave 4 (p < 0.001). These patterns of change were consistent in both men and women.
---
Changes in Health-Related Quality of Life According to Changes in in-Person Social Activities
We compared the changes in PCS and MCS scores from Wave 1 to Wave 4 among the four change categories in each social activity type. In the category of "meeting friends," there was a significant difference in the changes in PCS and MCS scores (F(3, 518) = 4.275, p = 0.005 and F(3, 518) = 2.813, p = 0.039, respectively). Those who had previously participated in meeting friends but had since stopped (i.e., the "yes + no" group) showed the largest reduction in PCS scores among the four categories (diff = -6.12), and this reduction was significantly greater than that of all other groups (ps < 0.05). There were no other significant differences in the changes of PCS and MCS scores among the change categories for the other types of in-person social activity.
To identify the differences in the changes in PCS and MCS scores by the change categories in meeting friends, post-hoc comparisons were performed using the Bonferroni adjustment. Regarding the PCS, participants who reported "yes + yes" showed a significantly greater increase than participants in the "yes + no" category (p = 0.04). Participants who reported "yes + no" showed a significantly greater decrease in PCS scores than participants in the "no + yes" category (p = 0.004). Moreover, participants who reported "no + no" showed a significantly greater increase in PCS scores than participants in the "yes + no" category (p = 0.020). Regarding the MCS, participants who reported "yes + no" showed a significantly greater decrease than participants in the "no + no" category (p = 0.037).
To further explore whether there was any gender difference in the effects of meeting friends, we compared the changes in PCS and MCS scores from Wave 1 to Wave 4 in men and women (Figure 2). In women, there was a significant difference in the changes in PCS and MCS scores among the change categories: F(3, 299) = 3.997, p = 0.008 and F(3, 299) = 2.808, p = 0.040, respectively. The post-hoc comparisons revealed a significantly greater decrease in PCS scores among female participants who reported "yes + no" compared with those who reported "no + yes" (diff = -6.67, p = 0.036) and "no + no" (diff = -5.68, p = 0.016). The mean MCS score decreased in female participants who reported "yes + no," whereas the score increased in all other categories. There was a significant difference in MCS scores between the "yes + no" category and the "no + no" category (p = 0.026) among female participants.
---
Discussion
An important indicator of healthy aging in older people is the preservation of good physical and mental health while living in a familiar environment [41]. In developed countries, rapid industrialization has led to economic development and improved housing, educational opportunities, and public health access in urban areas [42]. Simultaneously, however, a rapid decline in the rural population has affected not only the agricultural labor force but also the typical family structure, such as nuclear family configurations, which may increase the risk of isolation and poorer health-related quality of life among rural older adults [42]. This study provided insights into the longitudinal change of health-related quality of life (i.e., PCS and MCS scores) and the associations between changes in social activities (i.e., in-person social activities and social networks) and health-related quality of life among older adults living in a rural village in South Korea.
---
Longitudinal Changes in Health-Related Quality of Life
Regarding the longitudinal change of SF-12 scores in our sample, both female and male older adults reported a significant change over four years: participants reported a decrease in PCS scores and improvement in MCS scores. Comparing our findings on health-related quality of life with previous studies is challenging because of geographic diversity across the studies; moreover, relatively few studies have exclusively focused on rural older adults. We tried to compare our findings with studies targeting community-dwelling or rural older adults in various regions. Our findings were similar to those of one study of community-dwelling older adults in Korea by Kim et al. (2020) that reported a moderate average active aging score. The authors also reported on the three subdomain scores of active aging: scores on "safety" were the highest, followed by "health" and "participation" scores [43]. In contrast with our findings, Henchoz et al. (2019) found that the score of all domains of quality of life, including social and cultural life, health and mobility, and esteem and recognition, decreased in community-dwelling older adults [30]. Further, our results are not consistent with the findings of another previous study that reported that rural older adults might feel more loneliness and need for emotional support, as indicated by reporting worse emotional well-being compared with the self-report scores of urban participants [44]. These discrepancies may arise from several sources. For example, living arrangements may vary due to both cultural norms regarding filial responsibility and differences in community services across the selected countries and contexts, reflecting variations in the availability, cost, and quality of institutional care for older adults [43]. Future research should investigate the differences in health-related quality of life in older men and women between urban and rural areas. 
With respect to the components of physical and mental health, the average PCS score in our sample is slightly higher than that obtained in previous research [45,46], while the MCS is similar to the average value for six European countries (mean MCS score of 54.3) obtained using the SF-12 [46]. This higher PCS score in our sample may be due to participants' younger age and lower prevalence of obesity compared with the samples in prior studies. Obesity in the elderly can lead to chronic diseases and can affect daily life [47]. Further studies are needed to compare the impact of obesity on health-related quality of life across regions at the global level.
Concerning gender differences, our findings supported previous studies, which reported a poorer health-related quality of life among women than among men [48][49][50]. In the present study, male older adults were more often currently employed and living with a spouse than female older adults. In addition, female older adults had a higher prevalence of chronic diseases such as hypertension, hyperlipidemia, arthritis, and osteoporosis compared with male older adults. Working status, living arrangement, and multi-morbidities may be associated with physical performance or mobility [51]. Mobility impairments might have decreased participants' opportunities to participate in diverse social activities. To better understand these results, further investigation may be necessary to identify needs, ongoing behavior patterns, and barriers to mobility in rural older adults.
---
Changes in PCS and MCS Scores by Types of in-Person Social Activities
Importantly, we compared the changes in PCS and MCS scores of SF-12 for four years by types of in-person social activities. In our sample, "meeting friends" was the only social activity significantly associated with changes to physical and mental health-related quality of life. Ceasing to meet friends in Wave 4 was significantly associated with the largest decrease in PCS score and a small increase in MCS scores. This finding was in line with the result of a previous study which found that only informal social activities with friends were associated with increases or maintenance of life satisfaction [52]. Our result also aligns with a previous finding on the positive relationship between informal strong ties and subjective well-being [53]. In addition, this finding may support the results of previous studies which show that having multiple group memberships may lower the risk of functional disability and contribute toward maintaining mental health [54][55][56].
Interestingly, our findings revealed a greater decrease in PCS scores among female participants who reported "yes + no" compared with those who reported "no + yes" or "no + no." Furthermore, the mean MCS score decreased in female participants who reported "yes + no," while the score increased in all other categories. There was a significant difference in MCS scores between the "yes + no" and "no + no" categories among female participants. However, in men, beginning to participate in meeting friends in Wave 4 seemed to play a positive role in improving MCS scores, but this improvement was not statistically significant. Lam et al. (2018) argued that participation in multiple organizations is a psychosocial resource that protects older people from threats to their health due to changes in their social identity [57]. A decline in an individual's social role due to advancing age, which is one of the main changes in social identity later in life, may lead to isolation. Therefore, formal or informal social activities may contribute to improving an individual's health-related quality of life. Future studies are needed to investigate gender differences in longitudinal changes of quality of life according to living arrangements and types of social activities.
In the present study, over four years, overall participation in social activities increased across all activity categories-volunteering, religious activities, meeting friends, and hobbies-as did the social network sizes for both men and women. This result was inconsistent with a prior study using national data in China, which compared rural and urban older adults; in that study, urban older adults had better social activity support and reported better health status than rural older adults [58]. Moreover, these findings are in line with the argument that engaging in informal social activities is resource demanding. Older adults are, on average, not only less healthy than middle-aged adults but also have fewer cognitive and motivational resources that may enable them to get involved in activities, which require major effort [59]. Recently, one study reported that strong informal ties might increase subjective well-being among rural individuals [60]. That is, rural older adults may benefit more from informal strong ties, such as visits with friends, neighbors, or relatives, than urban older adults. This may explain why rural older adults differ from urban older adults and younger adults with particular respect to engaging in informal social activities with friends [52]. With this in mind, it is important to take into account the advantages and disadvantages of living in rural or urban areas when studying the social interactions of older adults.
---
Limitations
Our findings must be interpreted cautiously because we explored associations between changes in health-related quality of life scores and changes in self-reported social activity rather than examining any causal relationship between social activity and quality of life. Four years may not be long enough to observe major changes in health-related quality of life and social activities, given the relatively low variability in population migration, industry, and lifestyle in the rural area. Further longitudinal studies with older adults of diverse age groups would help answer this question. In addition, different answer options were used for the questions on in-person social activity participation in Wave 1 (i.e., "yes" or "no") and Wave 4 (i.e., 7 options, from 1 "several times per week" through 7 "not at all"). To resolve this difference, it was necessary to collapse categories in the Wave 4 data; our results should therefore be interpreted with careful consideration of the potential influence of the different answer options. In particular, it is possible that older adult participants underreported or were unaware of memory problems, which could lead to measurement error. In the longitudinal analyses, because the selection characteristics of the participants in the follow-up interviews were clearly biased toward higher-functioning individuals, the relationships of primary interest were likely attenuated. Self-reported data are subject to possible social desirability and recall bias, and solely relying on such responses might exaggerate potential relationships. Lastly, our data were from participants recruited from a single rural village in South Korea and therefore cannot represent the entire older adult population in rural South Korea or older adults in urban areas.
---
Conclusions
Our findings revealed that rural older adults who stopped participating in social activities over the course of four years reported a worsening quality of life compared with those who had never joined social activities. The people least likely to engage in social participation are also likely to be the most vulnerable: those with low income, those who are frail, the oldest among the elderly, and those in poor health face the most barriers to social participation. Thus, in rural areas, health professionals should be more vigilant in watching for changes in living arrangements and health in older people, particularly considering the physical distance between health facilities and households in most rural communities.
For rural older adults who are able to participate in social activities, investing in a ride-share program including going to the grocery store and the doctor's office would be beneficial. In addition, the social aspects of sharing meals would be beneficial for rural older adults who live alone. Mobile and wireless technologies may offer the potential of increasing connectedness for rural older adults by overcoming social participation challenges. Future research is needed to explore the lived experience of aging among rural older adults to identify practical approaches to promote physical and social activity, which may lead to improved overall health outcomes.
---
Conflicts of Interest:
The authors declare no conflict of interest. |
Larsen, Jonas and Christensen, Mathilde Dissing 2015. The unstable lives of bicycles: the 'unbecoming' of design objects. Environment and Planning A 47(4), pp. 922-938. doi: 10.1068/a140282p
---
Introduction
Cover photographs of 'mobilities books' tend to depict shiny cars, trains, and planes or people in fast (sometimes blurring) movement (Adey, 2009; Elliot and Urry, 2010; Sheller and Urry, 2004; Urry, 2007). Yet on the cover of After the Car (Dennis and Urry, 2009) there is a rusty and defective car abandoned in a field, seemingly discarded or forgotten. It captures a future state where the system of cars has collapsed and cars will rust away and 'haunt' Western cities as vacant factories do in former industrial hotspots such as Detroit and Manchester (Edensor, 2005). Rust and immobility here symbolize the fading power and allure of cars.
A more mundane and less future-oriented reading of Dennis and Urry's cover could accentuate the changing materiality and life cycle of all mobility designs. While cars enter the streets as shiny and functioning design objects, they will, over time, due to use and the weather, become objects of wear and tear. This process is sped up if they are poorly maintained and parked outside. Such designs consist of 'materials' (Ingold, 2007) that are in process: rusting, decaying, falling apart, and becoming waste, or being repaired or conserved [as Edensor (2011) and Strebel (2011) discuss with regard to buildings and Gregson et al (2009) concerning domestic objects]. This is not necessarily a sign of system failure (the argument in After the Car) but of 'productivity' and presence on the street. A predictable byproduct of any 'successful' mobility system is the wearing down of once-desired, fashionable, shiny, and useful objects, as well as repair attempts to mitigate this process. Yet, we argue that mobilities scholars have largely ignored the organic materials that constitute such designs and how they break down and become waste, especially if they are not subject to repair and maintenance work.
Finally, the cover's 'stillness' intrigues us. Mobility designs have mainly been analyzed as designs that mobilize people and cities. Mobilities book covers, as noted, depict movement.
A partial exception is the new Routledge Handbook of Mobilities (Adey et al, 2013) with its empty train wagon, seemingly garaged. It looks newly cleaned. Or perhaps it awaits repair? And yet none of the twenty-five chapters in the handbook explores parking, maintenance, or repair. In particular, cars and bicycles routinely spend far more time 'moored', parked and spatially fixed, than on the move. The spaces and politics of parking and waiting are strangely neglected in the mobilities literature and beyond (but see Aldred and Jungnikel, 2013; Hagman, 2006; Henderson, 2009; Larsen, 2015).
The mobilities literature is not blind to 'stillness'; it has eyes for erratic 'turbulences' (Cresswell and Martin, 2012) such as abnormal weather conditions. Flooding, hurricanes, and heavy snow disrupt the ordered flows of mobilities, causing delays, accidents, and disruptions. Major turbulences include Hurricane Katrina in New Orleans in 2005 (Hannam et al, 2006), the Icelandic ash clouds in 2010 that for a week or so shut European airspace and sent disruption rippling worldwide [see special issue of Mobilities 6(1)], and the storm off the south coast of England in 2007 that grounded a container ship on the beach (Cresswell and Martin, 2012).
Less discussed are those smaller 'turbulences' that recurrently affect everyday mobilities.
Cars and bikes, from time to time, run less smoothly and break down because of rusty chains, flat tires, defunct motors, and so on. They are immobilized and await repair work. Thus, we argue that the material life and 'stillness' of mobilities need to be analyzed. In a previous article one of us explored designs and practices of parking bicycles in Copenhagen, Amsterdam, and New York (Larsen, 2015). Two striking features of that fieldwork were, firstly, the sheer number of parked bikes on pavements and, secondly, that many of them were rusting, broken, stripped, and vandalized. In the present paper we document and analyze ethnographically such unstable, neglected, and 'half-dead' bikes in Copenhagen, as we encounter them in racks, on the pavement, and when the municipality attempts to clear them out. We are inspired by Aldred and Jungnikel's observation regarding English cities: "a common theme was concern about the bicycle when not in use. Bicycles at rest were perceived as threatened or threatening, risky or at-risk; affected by theft, vandalism, the weather, official and familial disapproval" (2013, page 609). Based on our field study in pro-cycling Copenhagen, we add that parked bicycles are endangered by, and become waste because of, a lack of, and lack of interest in, professional and DIY repair and maintenance as much as by theft and vandalism. This is a general problem in Copenhagen, although it is most apparent in densely populated neighborhoods with many smaller flats and younger people.
In doing so, this paper offers new insights into the unstable materials and lives of bicycles as everyday objects. Cycling is normally conceived of as a sustainable and environmentally friendly practice (eg, Banister, 2008; Horton, 2006), but this study shows that many bikes are ill treated and quickly become waste, and 'matter out of place' (Douglas, 2013), on the pavements. In what follows, we begin by discussing relevant research about the lives of consumer objects, waste, and maintenance work. This informs our 'ethnographic' vignettes from Copenhagen on its many mistreated, vandalized, and seemingly forgotten bicycles. This paper largely excludes well-maintained and expensive bikes parked inside flats and basements [see Bradtberg and Larsen (2014) for an account of such bicycles in Copenhagen].
---
The lives of consumer objects
We are theoretically informed by ideas that see consumer objects as 'becoming' and having a social and material life beyond their initial production and sale (Gregson and Crewe, 2003;Gregson et al, 2009). They have a social life as and when (different) people invest identity and emotions in them over time, especially when they are bought, used regularly, and later discarded. Some discarded objects gain a new lease of life if recycled, passed on to friends, or sold at charity shops, at flea markets, or on the Internet (Gregson and Crewe, 2003). They have a material life as they break down and stop working. They are unstable and aging: prone to scratches, general deterioration, and becoming obsolete. Crang calls this "the negative unbecoming of things" (2012, page 60). Design objects are not things, but a complex assemblage of many separate materials (Ingold, 2007) skilfully 'assemblaged' as a unified design (Gregson et al, 2010, page 848; see also Edensor, 2011). Bicycles, for example, are made from numerous materials, including plastic, iron and steel, and rubber, and consist of countless components such as frames, saddles, wheels, seat posts, handlebar grips, head tubes, brakes, spokes, hubs, rims, tires, seat stays, chains, front derailleurs, chain rings, pedals, crank arms, cogsets, rear derailleurs, mudguards, baskets, locks, lights, and much more. However, when living 'rough on the street' the materials of assemblages are constantly '(un)becoming' as elements are broken or stolen. This reading of objects as assemblages implies that, over time, design objects such as bikes can mutate and become 'something else': when, for instance, existing parts are broken or replaced. Assemblages, as Edensor points out, are: "never stable, closed and secure 'black boxes'. Although the constituent elements of a heterogeneous assemblage are enrolled to stabilize and order space and materiality, they are susceptible to entropy and disordering" (2011, pages 238-239).
This unbecoming is also seen when waste is discarded and the very materials of things are scrapped and recycled. Such a theoretical framework allows us to take bicycles seriously as material objects and explore their complex life-biographies:
(1) as energy-intensive and polluting consumer objects that are designed in the West, and produced in China or elsewhere, shipped to the West and sold as shiny commodities in 'local' shops;
(2) as everyday objects that are used, maintained, and repaired, but also subject to wear and tear, neglect, vandalism, and theft;
(3) as discarded, rubbished, and stripped objects living neglected lives in racks and streets;
(4) as 'disorderly' waste-objects that require costly removal by municipality maintenance staff and later scrapping;
(5) as second-hand gifts passed on to friends and family, or charity, or sold cheaply at car-boot sales and charity shops or expensively at high-end second-hand shops and markets;
(6) as stolen goods when bikes, or parts of bikes, are stolen for private use, resale, or in the assemblage of new bikes;
(7) as discarded and deassemblaged materials that are scrapped and reused in the production of new designs.
---
Waste
The above list highlights ways in which objects can become discarded and 'wasted'.
Research increasingly highlights how mobilities are great producers of pollution (Urry, 2011) and waste (for an overview, see Swanton, 2013). This can be illustrated with virtual mobilities (Urry, 2007)-eg, e-mails and phone calls-that are not as 'sustainable' as first imagined. Electronic waste (e-waste), as Graham and Thrift write, "is the fastest growing segment of the overall waste stream" (2007, page 19). Mobile phones, computers, etc are resource intensive to produce and many kilos of hidden resources go into their production.
They depend upon scarce minerals located in conflict-ridden countries like the Congo where they have fuelled war crimes and violations of human rights (Sutherland, 2011). They are produced-like most other consumer objects-in 'distant places' in the East and their transportation depends upon massive container ships, which consume a great deal of fuel (Urry, 2014).
The Internet and 'gadgets' consume electricity and generate mountains of e-waste, which again consumes energy. Information technology is responsible for roughly the same amount of global CO2 emissions as all the aircraft companies combined: namely, around 2% (http://www.information.dk/282929). To make things worse, they are used only for short durations, less than a year for mobiles (Graham and Thrift, 2007, page 19;Gabrys, 2011).
Consumer objects are increasingly designed to have short lives so that new designs can be purchased. They break easily, foreclose repair, and update poorly, while a constant stream of new models makes existing ones appear outdated and unfashionable almost overnight (Graham and Thrift, 2007, page 18). This is "planned obsolescence" (Gregson et al, 2007, page 697; see also Cooper, 2005, page 57;Slade, 2007), and is linked to the intensified production of consumer desires, of a speeded-up capitalist postmodernity (Bauman, 2007;Harvey, 1989;Lewis, 2013) where we are "victims of the morbid cycle of repetition, novelty and death" (Edensor, 2005, page 315). As Bauman writes: "The society of consumers devalues durability, equating the 'old' with being 'outdated', unfit for further use and destined for the rubbish tip. It is by the high rate of waste, and by shortening the time distance between the sprouting and the fading of desire, that subjectivity fetishism is kept alive and credible despite the endless series of disappointments that it causes. The society of consumers is unthinkable without a thriving waste-disposable industry. Consumers are not expected to swear loyalty to the objects they obtain with the intention to consume" (2007, page 21, our italics).
What Bauman calls the "waste-disposable industry" is discussed by geographers such as Moore (2012), Davies (2012), Crang (2010), and Gregson et al (2010); they explore the material and economic afterlife of waste across different spatialities. Waste is not necessarily final and fixed (Gregson et al, 2010, page 848). For instance, Gregson et al (2010) 'follow ethnographically' 'end-of-life' container ships that are sailed to beaches in Bangladesh where they are 'disassembled' or 'unmade': things deemed valuable are recycled in local furniture businesses while the ships themselves are scrapped and sold as steel scrap (Crang, 2010;Gregson et al, 2010). This is part of a wider 'offshoring' where waste is both a burden and an economic resource (Graham and Thrift, 2007;Urry, 2014). Even the abandoned car discussed in the introduction will probably be scrapped to extract the steel and aluminum and be recycled in new objects [see Moore (2012) for a general review of different geographical approaches to understanding waste].
However, discarded or unwanted Western consumer goods also gain new economic vitality and a new lease of life when they are passed on to friends and family members (for example, children's clothes that are outgrown before they are outworn) or sold as second-hand goods in charity or vintage shops. Studies of second-hand cultures show how things can move in and out of a commodity state through their lives (Gregson and Crewe, 2003;Gregson et al, 2007;Hetherington, 2004;Parsons, 2007;Thompson, 1979). Contrary to the idea of a 'throwaway society', many things are recycled and kept, as 'throwing out' useful things is considered, at least by some, as amoral and wasteful. And it can be heartbreaking to discard objects or 'home possessions' (Miller, 2001) of affection, no matter how outdated the rest of the world may consider them. In their research, Gregson et al find that: "whilst people certainly did get rid of consumer objects via the waste stream, they also went to considerable lengths to pass things on, hand them around, and sell them, and just as often quietly forgot about them, letting them linger around in backstage areas such as garages, lofts, sheds, and cellars, as well as in cupboards and drawers" (2007, page 683). So many 'outdated' objects are not properly binned but live half-forgotten and semiwasted lives back-stage. Yet they may be rescued from binning by lingering back-stage. This may be the case with large vinyl collections and photo albums that are 'out of place' in many 'digital family homes' (Larsen, 2014;Larsen and Sandbye, 2014;Reynolds, 2011). People are not only throwing out 'waste', but also living with their own and especially with other household members' semidiscarded objects that cause clutter and dust.
---
Maintenance work
Another 'waste-disposable industry' is maintenance work. Graham and Thrift (2007) critique social theory for focusing upon 'systems' that 'work'. Maintenance and mending are equally crucial because systems and things decay, break, and fail: "All infrastructural systems are prone to error and neglect and breakage and failure, whether as a result of erosion or decay or vandalism or even sabotage. Indeed, many such systems are premised on a certain degree of error or neglect or breakage or failure as a normal condition of their existence" (page 5).
Graham and Thrift note in particular how things and cities, day in and day out, decay a little, being exposed to all sorts of human and nonhuman practices, pollution, wildlife, and weather.
Things are always becoming and in the process of decaying. If unmaintained they will, sooner or later, become waste, as Edensor (2005) shows with regards to industrial ruins. So, Graham and Thrift argue that: "the world is involved in a continuous dying that can only be fended off by constant repair and maintenance" (2007, page 6). Edensor notes how a building, with reference to an old church, is "simultaneously destroyed and altered by numerous agencies, and stabilised by repair and replacement building material" (2011, page 243). The church is constantly in a process as various humans and nonhuman agents-for instance, the weather and insects-act upon its stony fabric. Without careful and meticulous maintenance and repair, the church would deteriorate-like the much younger industrial sites and the car in the field. While set in stone, a church requires nursing to remain alive.
Maintenance and repair work are thus part of an ordering project aimed at maintaining designs at their peak and clearing them off when deemed waste. Once defined as waste, designs become 'matter out of place' that undermine order. Decaying industrial sites are, when seen through the prism of order, 'disorderly ruins' (Edensor, 2005). Social, spatial, and material order requires continual maintenance (Edensor, 2005, page 313;Hetherington, 2004, page 159).
So far we have discussed maintenance and repair of 'big designs'. What about everyday designs? Professional repair is in decline because of high labor costs; the cost of repair sometimes exceeds the price of a new object/model (Gregson et al, 2009). DIY repair is probably also declining because many designs foreclose repair. Yet Gregson et al argue that "in a very real sense object maintenance drives the consumer world, much as Graham and Thrift (2007) have argued that it constitutes the city" (2009, page 268). They examine the repair and especially maintenance work that individuals perform when dusting, vacuuming, and cleaning cherished home possessions: "these practices endeavor either to keep consumer objects in or return them to their pristine state (as when new), to freeze the physical life of things at the point of acquisition and to mask the trace of consumption in the object" (2009, page 5). Maintenance and repair work can also be a source of improvization and innovation (Graham and Thrift, 2007). This is seen when people or professionals upgrade designs or restore vintage designs. Such DIY repair requires 'competences' (Watson and Shove, 2008) and interest in doing such work.
Inspired by the discussions above, in what follows we explore the social and material life of bikes in Copenhagen. In other words, how are 'parked' bikes treated in this iconic cycling city? We are particularly interested in examining how Copenhageners treat their own and others' bikes, by attending directly to the very 'materials' of bikes and their assemblages, as well as their constant 'becomings' and 'unbecomings'. Our analysis is based on a twelve-month ethnographic study (from 2 February 2013 to 2 February 2014) comprising observations, visual documentation, and interviews with cyclists and municipality staff. The study took place in Vesterbro, a gentrified residential neighborhood where cycling is very common and where the central main train station is situated. We focused our study on bicycle parking areas located just outside the main train station as well as on eight smaller residential streets, some of which have smaller shops and supermarkets, in two different parts of the neighborhood. On each of these streets, we observed, filmed, and photographed bicycles, and in particular their vital parts and materials: oiled and dried-out chains, rock-hard and flat tires, rusty parts, and broken and missing bits.
These observations are complemented by thirty-five short interviews with 'ordinary cyclists' at parking racks at the main train station and a nearby supermarket. The interviewees were recruited on the streets as they parked their bikes, and were offered anonymity as part of their participation. The interviews revolved around bike ownership, emotional attachment to bikes, and practices and competences of maintenance and repair.
Following this discussion is a vignette that illustrates how the Municipality of Copenhagen regards the many half-dead bikes as 'disorderly waste' and undertakes laborious maintenance work to remove them. Inspired by ethnographies of things-as-materials and 'waste disposable industries' (Gregson et al, 2010;Lane, 2014), we 'follow bikes' as the municipality removes them from the streets of Vesterbro. This takes us 'down the value chain' to the scrap sites where most of these bikes end their lives as scrap metal and 'up the value chain' or 'waste hierarchy' (Gregson et al, 2013) at police auctions where the most valuable bikes are auctioned. We have interviewed and exchanged e-mails with the municipality officers in charge of this operation and have on two occasions 'traveled along' with the municipality team as they taped, removed, and disposed of bikes from the streets of Copenhagen. The municipality has provided us with statistics about how many bikes they collected, recycled, and scrapped from 2008 to 2012.
---
Bikes in Copenhagen
Small bike shops selling and repairing bikes are everywhere in Copenhagen, with less than 150 m between them in one particular neighborhood (Nordstrøm, 2013, page 48). Many larger supermarkets sell cheap bicycles. They all sell a mix of Danish and international brands, but all have considerable 'carbon footprints'. A mere 2800 bikes were produced in Denmark in 2012 (by companies with ten or more employees) (Rühne, 2013). This compares with 105 000 in 2007, which reflects a global trend, with most bikes being produced in China or other low-wage countries (http://www.worldwatch.org/node/5462; Vivanco, 2013, pages 46-47). There are no available statistics about the average price of bikes sold in Copenhagen, but they seem to cost between £350 and £600, less in supermarkets and more in racer cycle shops. By Danish standards, this is not a fortune, especially compared with cars that easily cost around twenty to thirty times as much (new cars are heavily taxed in Denmark).
There is a striking contrast between observing bicycles in bike shops and observing them, especially when parked, out on the streets of Copenhagen. Their shiny newness, of polished surfaces and intact pieces, evaporates and mutates into rust, scratches, dirt, and missing parts. This 'deassembling' even characterizes newish-looking bikes. Premature ageing haunts a surprising number of parked bikes in Copenhagen. They look, generally speaking, neglected compared with the neighboring cars, so neatly parked and maintained. Many bikes have ingrained dirt, dried-out or sloppy chains, rusty parts, scratches, semiflat tires, and missing, broken, or bent parts.
Passers-by use bike baskets, especially on trashy bikes, as garbage dumps. If cigarette packets, cans, and greasy junk food paper are not removed immediately, the bike will soon mutate into a garbage tip. Few seem to care much about their own or others' bikes.
This woeful state of bikes in Copenhagen is partly the result of cheap materials that easily rust and break. But they are also systematically mistreated and vandalized by various human and nonhuman 'agencies' that act upon the very materials of bicycle assemblages. One such 'agency' is insufficient and poor parking. Cycle racks are designed to produce an orderly space, with rows of bikes neatly placed next to each other. Yet every so often chaos reigns. One inherent design problem with grid racks (the rack type in Copenhagen) is insufficient wheel attachment and support. This causes 'turbulence', with bikes falling when touched by the wind or a parking cyclist. This triggers further turbulence-a domino effect-where one falling bike takes down several others. A notorious shortage of grid racks in Copenhagen only aggravates this problem. Bikes are bent, broken, and scratched (see figure 1). The widespread use of stand props also causes 'turbulence'.
They conveniently provide ubiquitous parking without leaning against, or being supported by, a rack or street furniture, right at the destination. The downside is that they are easily tilted and knocked over. Walking often implies avoiding fallen bikes [for more on this, see Larsen (2015)].
The weather is another destructive nonhuman 'agency'. Rain and snow (as well as antifreezing road salt) cause rust, and both weather conditions prevail in Copenhagen. Visible rust outbreaks are seen on most metal parts of parked bikes, eating them from outside in, especially in all the scratches.
Theft is a negative human 'agency'. Stripped bikes, without pedals, or gears, or wheels, or handlebars, or even frames, haunt most bicycle racks and they are always at the mercy of a new round of bike 'vultures'. Locks and wheels are likely to be damaged during theft and a bike is 'thrown somewhere' when the opportunistic thief no longer needs it or feels remorse.
Bike theft is much more common than car theft (see Larsen, 2015) and almost all the interviewees had experienced it. Indeed, this was the major reason for buying (or inheriting) their present bike. Their old bikes were not worn out or in need of replacement. The risk of theft discouraged the interviewees from investing in high-quality bikes and from maintaining, and becoming attached to, their present bikes (see below).
Another destructive human 'agency' is that of treating pavements and racks as spaces of storage and refuse. Garbage is never placed on the pavements in Denmark [in contrast to many cities: for instance, Melbourne, where there are weekly kerb-side collections from bins (see Lane, 2011, page 398)]. Garbage belongs in designated back-stage areas (eg, courtyards), out of sight and smell. Yet many parked bikes seem to be 'forgotten' or misplaced by their owners, literally in the process of decaying and becoming waste. Perhaps people are not quite ready to bin their (stripped) bike or they cannot be bothered to throw it away properly. They linger on the street similar to, as pointed out by Gregson et al, semiforgotten things in garages, lofts, sheds, and cellars (2007, page 683).
Arguably, however, the major destructive 'agency' is lack of maintenance routines and skills.
Our interviewees put hardly any effort or pride into bike maintenance. Several said that maintenance would only make theft more attractive [see similar findings by Aldred and Jungnickel (2013)]. The interviewees did not talk about practices of polishing and washing bikes (as Gregson et al discovered in their study of domestic objects) or DIY and craft skills, as Watson and Shove (2008) noticed in relation to 'home improvement'. And there are no 'systems for bike maintenance' such as 'car washes' at petrol stations, nor compulsory MOT tests for ageing cars.
Few talked about having basic DIY bike repair skills. Most visit a bike shop around the corner for smaller repairs, such as a flat tire. Given the high cost of such repair work (the hourly rate is around £35), few invest much in repairing a cheap, shabby bike when a new bike is only marginally more expensive. However, according to interviewees, the lure is not the latest version but rather a new version of the same or a similar bike. Bikes are in this respect less victims of 'planned obsolescence' than are mobiles and laptops. As one middle-aged woman said when she was asked what was wrong with her old bike since she had bought a new one: "It got stolen. I had the same model before but it got stolen, so I replaced with the same model." Many interviewees have detached relationships with their bikes, which are just bikes that might be 'wounded' or even gone by tomorrow. Little money, emotion, and maintenance are invested in such 'ordinary' bikes. As one woman said: "I have a bicycle from Kvickly [a Danish supermarket]. Because I got tired of having bikes stolen all the time. They were much nicer." Another stated that she had bought her present bike "because it was cheap" so "that it wouldn't get stolen". They are means of transport and have little identity or lifestyle value because the wider environment makes it difficult to develop such a relationship. This is in contrast with many 'home possessions' that are much easier to protect and therefore develop affection for over time (Gregson et al, 2013).
All these 'agents', in combination, cause an ongoing stream of small-scale turbulences and spatial disorder that produce abandoned, immobile, and ownerless bicycles in great numbers.
Moreover, they illuminate their unstable nature. These agencies destabilize and mutate bicycles into discrete materials, even before they are scrapped. Observing such not-yet-end-of-life bicycles shows that they "are not just singular objects but simultaneously multiple, heterogeneous things and materials" (Gregson et al, 2010, page 847).
The 650 000 bicycles in Copenhagen mean that a great deal of parking space is needed, and streets and racks brimming with bikes are the reality today (Larsen, 2015). This overcrowding represents a practical planning problem of 'waste and matter out of place' (Moore, 2012, page 786). Abandoned, immobile, and ownerless bikes disturb the smooth running of things (Moore, 2012, page 781). The Copenhagen Municipality estimates 'conservatively' (based upon the sale and theft statistics) that 40 000 bicycles are abandoned every year on the streets, in courtyards, and in racks (Nielsen, 2012). The bikes also, to some degree, become objects of irritation amongst cyclists and pedestrians, blocking pavements and exits.
Bikes also mobilize irritation because they ruin the clean image of both cycling and Copenhagen. Abandoned bikes may not smell or be a health hazard, but they look trashy and disorderly. Trashy bikes on pavements are simultaneously 'in place' and 'out of place', similar to, say, out-of-date food in a fridge. They are wasting 'loudly' on their own 'front-stage' (recall Gregson et al above). Abandoned bikes are a (visual) waste problem and a space problem, calling forth maintenance work, according to the municipality. This reflects more broadly that waste is 'political' and "a becoming process between matter-out-of-place and matter-in-place" (Pikner and Jauhiainen, 2014, page 47).
We turn now to how the municipality collects 'dead bikes' in practice. The process is similar to the manner in which bikes are collected in train stations and 'from below' by private flat owners in the communal spaces of apartment buildings (eg, courtyards and basements). In short, everyone must follow the police guidelines surrounding the collection of abandoned bicycles [for another study of collective clearing out 'from below', see Pikner and Jauhiainen (2014)].
---
Maintenance work
The municipality spends, according to the municipality officer responsible, some £240 000 annually on a special bike refuse unit (e-mail, 16 March 2013, and telephone conversation, 22 May 2014), which is separate from other garbage and maintenance duties (eg, emptying public waste bins and sweeping the streets). Between 2008 and 2012 the municipality collected some 6426 bicycles yearly on average, according to its own spreadsheets (our calculations based on the municipality's own statistics). In addition to the municipality, the train company DSB and the police collect 16 000 bikes a year (Nielsen, 2012). There is thus a steady flow of new bike waste. The municipality estimates that 10-12% of all bikes found in public are abandoned (Nielsen, 2012).
Yet the cultural and legal status of the bike in Copenhagen makes this maintenance work difficult, time consuming, and ineffective. First, as argued, many bikes in Copenhagen are marked by wear and tear, lacking repair and maintenance. It is difficult to detect whether a bike is abandoned or not. The bikes do not reveal their 'social biography'. Second, by law, bikes are 'untouchable'. It is illegal for shop owners and others to remove bikes even if they block shop windows or façades (Larsen, 2015). Not even the municipality is allowed to remove a fly-parked bike from, say, the pavement to a nearby rack or to bin a rusty bike.
However, from March 2013 it was permitted, on a trial basis, for authorized municipality personnel to remove bikes to a nearby place if the bikes were blocking emergency routes and passages. Yet the municipality is still obliged to inform the owners where they can pick up their bikes.
Bikes are largely protected and 'free-riding' objects on the pavement. This is unlike most other abandoned objects, which are subject to immediate removal. Not even cars have the same protection as bikes. Car parking is strictly regulated through (sometimes considerable) payment, time zones, parking signs, parking meters, traffic wardens, and rules about parking and fines (Hagman, 2006;Henderson, 2009). Cars with no or invalid tickets will be fined and eventually clamped. The owner can be tracked down due to the personalized registration of cars (eg, number plates). In contrast, cycle parking is unregulated and free of charge, and bike ownership is not registered (Larsen, 2015). Although, by law, all bikes sold in Denmark have to have a unique serial number engraved into the frame (and this number has to be stated on the receipt), the bike is not registered with the authorities. A '(littering) bike' cannot be traced to the owner. How, then, does the municipality go about clearing out and recycling 'dead' bikes? Analytically, we divide the work into: preparation, identification, separation, removal, and afterlife.
---
Preparation
Maintenance work involves planning. The renovation team's long-term planning involves ensuring that they systematically 'clear' exposed locations such as train stations every few months, and different neighborhoods yearly or every second year. In between, the team makes more ad hoc collections at notorious spots. We focus here on the planned, systematic clearance. Detailed maps divide neighborhoods into manageable areas of some twenty streets. These units are then given a timeframe of five weeks to be cleared. When an area is next on the schedule, an employee cycles to the area (on an electric bicycle) and hangs posters (see figure 2) on all the doorways. These inform residents that: "This street will be cleaned of dumped bikes and the remains of bikes ... . Copenhagen Municipality would like to make an effort so that your neighborhood appears at its best-as a cosy and tidy place. Therefore, we will clear out bikes in the area marked on the map."
The poster describes how all bikes will be taped with yellow tape from a specific date.
Residents are asked to remove the tape from their bikes, or their holidaying neighbours' bikes, if still in use.
The municipality promises residents that the maintenance work will have a positive outcome for their neighborhood; it will become a 'cosy and tidy', ordered place. However, the poster says nothing about 'recycling' and it is evident that such bikes are not regarded as a potential resource. We also see that the municipality distinguishes between, and targets, different bike assemblages as part of the garbage work: namely, 'dumped bikes' (ie, without an owner, because they have been abandoned or separated from their owner due to theft) and 'bike remains' (eg, broken or 'stripped' bikes). The former can, in principle, be a new and working bike, which does not look like trash. Yet it makes sense to include the former, because, as discussed above, many working consumer objects are prematurely discarded. Moreover, many stolen bikes are not sold but dumped when no longer needed. This procedure reflects the fact that the law states that everyone (except the police) must give four weeks' notice before they can legally remove bikes. It also reflects a concern within the municipality, we are told by refuse collectors, that disposing of bikes that are still being used and valued by their rightful owners jeopardizes the legitimacy of the project (field notes, 29 May 2013, 9 July 2013). It would, so to speak, turn the municipality into a thief. The posters and tape minimize that risk.
---
Identification
The employee indiscriminately fastens yellow tape around every single bike, one after another. A 200-300 m long street can easily hold several hundred bikes, and taping each of them takes hours. Staff bend forward a little and tape together one of the wheels and the mudguard (standard equipment on Danish bikes) with narrow yellow tape carrying the municipality's logo (see figure 3). This is a clever design that automatically unseals when the bike is used, independently of whether the owner notices it or not. If the tape is broken, the bike will be regarded as 'in use', whereas intact tape is the indicator of bikes-to-be-wasted. Intact tape is taken as a sign that the bike is either immobile (eg, not in use) and/or ownerless. The waste status of these bikes is thus a spatial-temporal one: they have remained in the same place for more than four weeks. This challenges ideas that disposal and waste are purely spatial categories (Hetherington, 2004;Parsons, 2007, page 391).
---
The in-between period
Slowly, during the four weeks, there are fewer and fewer taped bikes in the streets. This is according to the script if they are released through use. However, the tape can also break against the 'script'. Some may remove the tape from their stationary nonworking bikes because they plan to repair them one day. Youngsters may remove the tape as a prank. Therefore, immobile bikes will most likely still occupy sought-after space after clearing. At the end of the period, a taped bike may encourage theft, especially those bikes that are already unlocked due to recent theft. Some may find such theft legitimate since the bike is going to be scrapped anyway: in fact, they keep the bike alive by stealing it or some vital 'organs'. One of us, for example, fell in love with a beautiful 1970s European racer that, with a good dose of repair and some new parts, could become a real vintage beauty. Having been taped for four weeks it was destined for scrapping. The night before clearance it disappeared. This reflects the fact that vintage bikes from the 1970s and 1980s are fashionable amongst cool young people (Haddon, 2012;Weis, 2013). Clearly, this was not the only soon-to-be-discarded bike that was scavenged during those four weeks. This is despite the fact that it is illegal and the bike might be stolen property (the engraved steel number is seldom 'unbecoming'). Lane's (2011) study of hard rubbish collections on pavements in Melbourne showed that 35% of stuff was scavenged prior to collection by the municipality. However, while the Melbourne residents were a little unsure whether the practice was actually legal, pavements and nature strips were regarded as legitimate places to acquire stuff during the announced period, and there was no fear of acquiring stolen property or committing a crime (Lane, 2011).
On the announced day of the actual clearing, five weeks after the notice, most tapes will have been unsealed. To our surprise: "the tape had disappeared on many, many poor and even 'scrap-worthy' looking bikes" (field notes, 29 May 2013).
---
Separation and removal
After five weeks the actual disposal work can begin. This phase can be further divided into a recycling phase and a scrapping phase. Based on our observations as we traveled with the refuse team, one of us wrote: "First, employees trained in bike mechanics walk the streets, assess and pick those bikes with the recycle exchange potential of making at least £60 at a later auction. These selected bikes are registered manually in a digital tablet system. Only functioning and intact bikes (of known brands) are selected, while valuable and functioning parts are overlooked. We are told that it is not worth picking less valuable bikes or parts due to the expenses involved in handling them" (field notes, 29 May 2013).
Most bikes are not worth around £60 after 'living rough' on the streets of Copenhagen. We observed that most bikes to be scrapped were inexpensive supermarket bikes or middle-range brands, victims of what appeared to be a few years of wear and tear. Even fairly new (especially cheaper) supermarket bikes were snubbed, not deemed worthy of 'moving up' the value chain unless in perfect condition. Moreover, many functioning parts that could easily become 'vital cogs' in a new bike assemblage are relentlessly overlooked and head to the 'scrap graveyard'. The 'chosen few' deemed fit for reuse are then driven to a storage facility where they are immobilized for another four weeks of waiting (see later).
All the remaining bikes are indiscriminately destined for scrapping. We were initially surprised to realise that the vast majority are considered nothing but waste. Clearly many of them are in very poor condition but equally many are little or no worse than many in-use bikes. With a little maintenance and repair many would even be in a better state than most other street-residing bikes. Furthermore, our observations suggest that few of these bikes are unlocked or appear to have been stolen, based on the state of their O-locks [the common lock in Copenhagen (see Larsen, 2015)]. This suggests that many abandoned bikes are neglected and forgotten rather than stolen goods.
Then an employee registers all the taped bikes in a tablet. This is partly to keep track of the overall work and partly to organize the forthcoming removal. Then the renovation team arrives on one of the following days with a lorry to pick up the many registered bikes, taking one street after another. The diary extract below gives a sense of how clearing takes place at a small residential street where most bikes are parked with stand props or by leaning against house façades: "The tablet states that on the next street there are a handful of bikes awaiting removal. Just enough to fill the load! The driver navigates the one-way streets; turns the corner and scouts after bright yellow tape, signalling us to stop. Five bikes stacked together are spotted. This is a few less than the tablet says. They must have gone missing! They are loaded onto the back of the lorry in a matter of minutes. And this is despite the fact that they are all locked! The many bikes with O-locks are simply lifted by the back wheel and rolled onto the lorry. These locks are so easily undone by a human hand. The few bikes that are 'moored' to something with quality u-locks or chains, it turns out, are almost equally impotent when facing a professional bolt cutter. They are lifted within seconds! I count twenty-five bikes on the lorry now. Then we inspect the whole street by foot to make sure that we do not miss some.
Another four bikes are found and they are squashed in together with twenty-five other bikes.
The pick-up request is 'cancelled' on the tablet and it is updated with the number of bikes collected. We leave the street and drive directly to the scrapyard" (field notes, 29 May 2013).
Clearly, this work has some 'cleansing' effect. When we inspected the streets some days later, the pavements looked less messy, crowded, and haunted by trashy bikes, although the work had not entirely eradicated the problem. Interestingly, there was also less general garbage, as the municipality's sweeping machines could better traverse these streets.
---
Afterlife
As mentioned, the collected bikes' destiny is recycling or scrapping. The 'unlucky' ones are driven directly to the scrapyard on the outskirts of Copenhagen where: "they are 'brutally' smashed together with cars and other metal waste: brutally, as they are not examined for valuable and reusable parts" (field notes, 29 May 2013) [as is the case with shipwrecks in Bangladesh, see above] (see figure 4). These bikes end their lives as mixed scrap that is shipped abroad and remelted in countries such as Turkey, Vietnam, and India, the head of the scrapyard tells us (field notes, 29 May 2013). The payment that the municipality receives in return (around £12000) is symbolic and covers only 4.5% of the expenses connected with the clearance (e-mail correspondence with municipality officer, 15 March 2013). This is despite the fact that 97.5% of the collected bikes are turned into scrap metal and only 2.5% are reused (our calculations based upon the municipality's statistics). The to-be-recycled bikes have to wait another four weeks before they can be sold at the police auction. This is in case a rightful owner calls for his or her bike, and to check whether any has been reported stolen. If a bike has been stolen, it is reunited with its owner or handed over to the insurance company if the insurance payout has already been made. The unstolen and unclaimed bikes are then sent to popular police auctions open to members of the public hunting for a good bargain. Second-hand cultures are often driven by thrift, by more rather than less consumption (Gregson et al, 2013). At the auction we attended, a few hundred bikes were sold at an average price of £88 (only a few did not receive any offers), bought by private buyers for their use-value and by bike shop owners for their exchange-value. After a bit of repair work, the bikes will be resold and thereby reinjected into another cycle of exchange and use on the very streets from which they had previously been removed.
---
Conclusion
The Copenhagen bicycle system is applauded worldwide for its high-quality elevated cycling lanes. This pro-cycling city has more kilometers of cycling lanes (relative to its population) than any other big city (Buehler and Pucher, 2012, pages 292-294; Larsen, 2014a). Moreover, they are well maintained, not plagued by potholes and vanishing painted lines [as is common in New York and London (see Larsen, 2014a)]. Yet, as Graham and Thrift (2007) argue, even well-functioning systems are prone to errors and neglect. In this paper we have argued that the bicycle system in Copenhagen, because of its many neglected, half-dead, abandoned, and wasted bikes, is no exception. We argue that, generally speaking, many bikes in Copenhagen are treated as inferior and disposable objects that 'live rough' on the streets. Consumer objects are always unstable and unbecoming, yet this process is speeded up in Copenhagen with regard to bicycles. It is caused by a combination of inadequate urban design, lack of maintenance and repair, weather conditions, and cheap bikes made of poor materials. A vicious combination of poor parking design and theft discourages investment in quality bikes and becoming attached to them. This means less cycling. Neglected bikes do not afford a smooth, fast, or long ride. This in part explains why longer commuter journeys to and from Copenhagen are rare (Bradtberg and Larsen, 2014). Cycling, usually applauded as a sustainable form of transport, can also cause CO2 emissions, clutter in the streets, and waste. This, we have argued, is not because bikes are victims of the whims of fashion and planned obsolescence (as with many other consumer objects) but rather because many people ride cheap bikes that are insufficiently maintained and repaired, and often replaced with a new cheap model within a few years.
We have paid particular attention to the municipality's attempted ordering of the perceived disorder and waste around bikes. This maintenance work is slow, challenging, and expensive.
Order is never maintained for long. Stolen and half-dead bikes quickly haunt the stands again: these are bikes that are going nowhere, woefully heading towards the garbage tip. As Edensor says more generally: "Systems of disposal are rarely perfect and matter is often more difficult to eradicate than is imagined" (2005, page 836). Very few of the collected bikes or bike parts are recycled. There is more concern with getting rid of dead bikes than with 'giving first aid' and saving functioning 'organs'. This is in part because the law prevents people from scavenging unclaimed bikes and prevents the municipality from giving away the collected bikes.
We end by proposing that a truly cycling-supportive and sustainable city should be known not only for its cycling-friendly environment (cycling lanes, speed reduction for cars, few bicycle casualties, and so on) but also for adequate parking, little bike theft, good-quality bikes, second-hand and vintage bike cultures, DIY repair and maintenance skills, and affectionate bike owners who treat their own and each other's bikes with respect. In this light, Copenhagen is not yet a truly great cycling city.
This research describes the types of impoliteness strategies and the emotional expressions involved in the impoliteness used by haters commenting on the social media platform Instagram. Many impolite statements were deployed by haters of FIFA because the football association is considered unfair for making decisions that treat Israel and Russia differently. A descriptive qualitative method was employed to analyze the research data. The present research took its data from haters' comments in the comment column of the @fifa Instagram account. The data analysis included identifying and describing the use of impoliteness strategies and the emotional expressions involved in the impoliteness. The research revealed that several types of impoliteness strategies were used by the haters, with bald on record impoliteness being the most dominant strategy, and that anger was the most dominant emotional expression involved in the impoliteness.
Language plays an essential role in communication: with language, people can communicate with one another, either directly or indirectly. Nowadays, people can not only communicate face to face but can also communicate in cyberspace, through social media such as WhatsApp, Instagram, Twitter, Facebook, Line, etc. According to Erza (2018), communication through social networks has recently become a phenomenon, and one of the most popular social networks is Instagram.
Instagram provides various features that allow users to post uploads and to comment on other users' uploads. According to Blair and Serafini (2014), Instagram is a social network based around sharing pictures and fifteen-second videos, which can be posted to other social media sites; with Instagram Reels, users can now upload longer videos. When commenting on an account's uploads, Instagram users can use language freely, whether polite or impolite, because Instagram has no feature that filters the comments its users give, even though users today can set a warning message in their posts asking other people to use proper language when commenting. Instagram users commonly follow accounts according to their interests. Through this social media platform they can exchange information, post pictures and short videos, follow celebrities or influencers, promote products, share informative or educational content, and much more. Followers commonly leave comments to express their admiration, liking, dislike, or disappointment towards an account they follow. Sometimes they find mistakes and take out their frustration on Instagram accounts through comments. However, not all expressions used by Instagram users are appropriately polite. They leave inappropriate comments on an account, and this is "impoliteness".
This study distinguished between rudeness and impoliteness. The former is the use of offensive language that is unacceptable in a particular social context. The latter is a language phenomenon that commonly carries a negative meaning because it is intentionally used to attack a targeted person or institution. The study of impoliteness was pioneered by Culpeper in 1996, in which he examined the impoliteness used to attack army recruits. Culpeper (1996) proposed impoliteness strategies as the opposite direction to Brown and Levinson's (1987) politeness. In a later study, Culpeper (2011) defines impoliteness as a mental attitude held by a participant that is comprised of negative evaluative beliefs about specific behaviours in a specific social context, as well as the activation of that attitude by those behaviours in context. According to Culpeper (2011), impoliteness is a disapproving attitude toward particular actions taking place in particular circumstances. Expectations, goals, and/or beliefs about social organization, namely how one person's or a group's identities are mediated by others through interaction, serve as the foundation of impoliteness. Disrespectful behaviour like impoliteness is supported by expectations, desires, and/or beliefs about certain values (Fatimah & Arifin, 2014).
Impoliteness has also been studied in foreign language learning, in terms of its forms and strategies. One example is impoliteness in EFL learners' complaining behaviours across social distance and status levels (Wijayanto et al., 2017; Wijayanto, 2019). In addition to foreign language learning, impoliteness has been examined in films, such as the research conducted by Yusniati (2022) entitled Impoliteness Strategies Found in Akeelah and the Bee Movie. Impoliteness has also been examined on various social media, such as the impoliteness on Instagram used by haters of Lady Gaga (Permata et al., 2019) and the impoliteness strategies used on a politician's Facebook page (Halim, 2015). On YouTube, impoliteness was studied by Arrasyd and Hamzah (2019) in Impoliteness Strategies in YouTube Comment Section Found in Indonesian Presidential Debate 2019.
The purpose of the current research is to continue the research conducted by Apriliani (2023), but with a different focus and research questions. The previous research focused only on the types of impoliteness strategies used by haters, while the current research also explores the emotions expressed via that impoliteness. The researchers studied the impoliteness used by haters in comments on the @fifa Instagram account, for several reasons. First, Instagram is one of the most popular social media platforms and almost everyone uses it. Second, there are many comments in which haters use impolite language on Instagram as a form of their disappointment. Third, FIFA has become a trending topic on Instagram among football fans because of its controversial decisions. Many people were disappointed that FIFA did not sanction Israel when Israeli soldiers fired tear gas at a football match in Palestine, and haters have accused FIFA of holding a double standard in favour of Israel. The impoliteness used by haters in FIFA's Instagram comments was therefore examined. The following is an example of a hater's impolite comment on the @fifa Instagram account. @fsssaaammm: "double standard. When two different parties make the same mistakes, the treatment they get is also different".
The excerpt above shows that a hater used impoliteness to express his disappointment with FIFA's decision, which is considered to reflect a double standard, because FIFA treated the Israeli football team differently. Before the 2022 World Cup, the Russian football team was banned from competing because the Russian government had attacked Ukraine, while Israel was still able to take part in the 2023 U-20 World Cup even though the Israeli army had attacked a football match in Palestine. This made the haters furious because they felt that FIFA had made an unfair decision.
---
Literature Review

Pragmatics
Impoliteness is a language phenomenon related to the use of language as intended by its users, and it is commonly studied under pragmatics. According to Mey (2001), pragmatics studies how language is used in interpersonal communication, which is influenced by social norms. According to Yule (1996), pragmatics is the study of the relationships between linguistic forms and the users of those forms; only pragmatics allows humans into the analysis in this three-part distinction. Meanwhile, according to Levinson (1983), pragmatics focuses on how language interacts with the context in which it is used to express the meanings of language users. In short, pragmatics is a branch of linguistics that studies the relationship between the external context of language and its meaning, through interpreting the situation in which the language is used. In other words, pragmatics studies speakers' meanings or intentions based on the context of the situation in which the speech occurs. In the present study, the intentions of speakers (haters) were examined through their use of impoliteness.
---
Impoliteness
In a simple definition, impoliteness is the opposite of politeness, and it is intentionally performed (Wijayanto, Hikmat, and Prasetyarini, 2018). Expressions included in politeness generally support the face, whereas impoliteness strategies attack it. According to Bousfield (2008), impoliteness constitutes the communication of intentionally gratuitous and conflictive verbal face-threatening acts (FTAs) which are purposefully delivered: (1) unmitigated, in contexts where mitigation is required, and/or (2) with deliberate aggression, that is, with the face threat exacerbated, 'boosted', or maximized in some way to heighten the face damage inflicted. Culpeper (2011) stated that impoliteness strategies refer to approaches for attacking face wants, whereas politeness strategies refer to ways of assisting or redressing face wants. Impoliteness appears as a form of emotional outburst driven by hatred; expressions of impoliteness also emerge from the urge to vent frustration. Culpeper (1996) classified five types of impoliteness strategies, namely bald on record impoliteness, positive impoliteness, negative impoliteness, sarcasm or mock politeness, and withhold politeness. The following is a detailed explanation of the types of impoliteness strategy based on Culpeper (1996).
---
Bald on record impoliteness
This strategy is used to attack someone directly without considering the interlocutor's face. Culpeper states that the face-threatening act (FTA) is performed in a direct, clear, unambiguous, and concise manner. This strategy occurs because the speaker deliberately does not want to maintain a good relationship with the interlocutor. An example of bald on record impoliteness is "Shut up, you dumb". This sentence is said directly and frankly in an unambiguous way, which makes it bald on record impoliteness.
---
Positive impoliteness
This strategy is designed to damage the interlocutor's positive face wants. Positive face refers to each individual's desire to be respected, valued, and needed by others. Several sub-strategies of positive impoliteness include ignoring others, rejecting them, disassociating from them, being disinterested or unconcerned, and making others uncomfortable. An example of positive impoliteness is "No, keep away! Go home, we don't want you!" This utterance is the speaker's rejection of the interlocutor, so the sentence is considered positive impoliteness.
---
Negative impoliteness
This strategy is used to damage or attack the interlocutor's negative face wants. Several sub-strategies or outputs of negative impoliteness include frightening, condescending, scorning or ridiculing, being contemptuous, not treating the other seriously, belittling the other, invading the other's space, and explicitly associating the other with a negative aspect. An example of negative impoliteness is "Babyish, isn't it?" The term 'babyish' is used to express scorn at someone, so it is considered negative impoliteness.
---
Sarcasm or Mock Politeness
Sarcasm can be used to convey the opposite of someone's real feelings about something: the intended meaning is in contrast with the polite surface. Through this strategy, speakers use polite language but with impolite purposes; in other words, politeness is used by speakers to generate impolite meanings. This counts as an impoliteness strategy because speakers are not sincere in what they say. An example of a sarcastic utterance is when someone tells the interlocutor that "today is a good day" when, in reality, the day is a bad one. In the case of impoliteness, the sarcastic utterance is used to attack the interlocutor.
---
Withhold Politeness
Withhold politeness occurs when the speaker does not carry out the politeness strategy expected by the listener, or remains silent. In other words, it is a situation in which a speaker is expected to show politeness, as commonly required, but does not provide it. This is considered intentional impoliteness. For example, when someone has been helped by another person but deliberately does not express gratitude, this is withhold politeness. Being silent and failing to thank are thus realizations of this strategy.
Impoliteness can be induced by several factors. Culpeper (1996) reported that unequal social power, intimacy, and conflicts of interest can provoke impoliteness. People with more social power tend to use impoliteness towards those with less. Those who have close relationships also tend to use impoliteness. A conflict of interest is the commonest trigger of impoliteness, with each party trying to defend its own interests. A study by Bousfield (2007) found that offensive situations can trigger impoliteness. Spencer-Oatey (2005) stated that negative emotions can regulate linguistic behaviour, including impoliteness. Some studies have found that emotions and impoliteness go together. For example, Kienpointner (2008) reported that specific negative emotions can provoke the use of impoliteness. Wijayanto et al. (2018) concluded that negative emotions such as anger, annoyance, and hatred can provoke impoliteness. They explained that expectations, hopes, and rights that are not fulfilled can induce negative emotions in speakers and listeners, and these negative emotions provoke the use of impoliteness.
---
Emotion Knowledge
Emotions are bodily reactions to particular situations and conditions, and they are aspects that determine a person's attitude. They play an important role in social interactions. Emotions interact with information about the situation and its norms, and all that information is represented in emotional schemas in memory (Culpeper et al., 2014). Emotions can be classified into two types, namely positive and negative emotions. Positive emotions usually involve feelings of happiness, cheerfulness, peace, and joy, while negative emotions, their opposite, involve feelings of anger, disappointment, sadness, and hatred. The emotions involved in impolite expressions are negative emotions. According to Shaver et al. (1987), negative emotions are classified into three types, namely anger, sadness, and fear.
---
Anger
Anger is a form of emotional expression used to convey rage and hatred towards certain situations. There are several subordinates of anger expressions, including torment, envy, jealousy, disgust, revulsion, contempt, rage, outrage, fury, wrath, hostility, ferocity, bitterness, hate, loathing, scorn, spite, vengefulness, dislike, resentment, exasperation, frustration, aggravation, irritation, agitation, annoyance, grouchiness, and grumpiness.
---
Sadness
Sadness is an expression used to describe feelings of sorrow and loss. There are several subordinates in the sadness group, including pity, sympathy, alienation, isolation, neglect, loneliness, rejection, homesickness, defeat, dejection, insecurity, embarrassment, humiliation, insult, guilt, shame, regret, remorse, dismay, disappointment, displeasure, depression, despair, hopelessness, gloom, glumness, sadness, unhappiness, grief, sorrow, misery, melancholy, agony, suffering, hurt, and anguish.
---
Fear
Fear is an emotion that arises in response to a perceived threat and can develop into anxiety. There are several subordinates of fear, including anxiety, nervousness, tenseness, uneasiness, apprehension, worry, distress, dread, alarm, shock, fear, fright, horror, terror, panic, hysteria, mortification, pity, and sympathy.
---
Method
This study used a qualitative approach to describe and explore the use of impoliteness and the emotional expressions involved in it. Qualitative research is conducted to observe social phenomena and human problems and to explore and understand the meanings that a person or a society attaches to the phenomenon itself (Creswell, 2012). The type of qualitative research employed was descriptive research, in which the data take the form of words or pictures rather than numbers, so that the results contain quotations from the data to illustrate and substantiate the presentation (Bogdan and Biklen, 1982). Bogdan and Biklen (in Sanjaya, 2022) also stated that if the research data consist of verbal or social behaviour, they need to be analyzed descriptively, and researchers need to use qualitative methods.
The present research used the theory of Culpeper (1996) to describe the types of impoliteness strategy and the theory of emotion knowledge by Shaver et al. (1987) to explore the emotional expressions. The object of this research was expressions of impoliteness. The research data were comments written by haters that contained impoliteness. The researchers took the data from social media, namely comments from users on the @fifa Instagram account. The techniques used to collect the data were observation and documentation, through the following steps: (1) observing the @fifa Instagram account; (2) selecting data that could be categorized as expressions of impoliteness; (3) taking screenshots of comments containing impoliteness; and (4) paying close attention to the expressions of impoliteness made by the haters.
To analyze the data, we carried out the following steps. First, we described the expressions of impoliteness found among Instagram users on the @fifa account using Culpeper's (1996) impoliteness theory; we then identified the emotional expressions of the haters using the emotion knowledge theory of Shaver et al. (1987); finally, we drew conclusions.
---
Results
FIFA is the international federation governing football worldwide; all matters related to football fall under its remit. When the U-20 World Cup championship was about to be held, FIFA made a decision that was considered unfair because it allowed the Israeli football team to take part in the championship. This disappointed the haters, who felt that FIFA should have banned Israel from the World Cup as a sanction, because Israeli soldiers had attacked a football match in Palestine. FIFA had previously banned Russia from the World Cup as a sanction for Russia's attack on Ukraine, yet under comparable circumstances it still allowed Israel to compete. The haters therefore felt that FIFA was unfair in making decisions, giving rise to negative responses expressed in comments on the @fifa Instagram account.
The present research observed the use of impoliteness by haters found in the comments on the @fifa Instagram account. First, it analysed the types of impoliteness strategies used by the haters; second, it observed the emotions the haters expressed via the impoliteness. To answer the first research question, the present research used the impoliteness strategies of Culpeper (1996), and to answer the second, we used the emotion knowledge theory of Shaver et al. (1987).
---
Types of Impoliteness Strategy on FIFA's Instagram

Bald on Record Impoliteness
This strategy is used to attack someone directly without considering the face of the interlocutor. Culpeper (1996) stated that the face-threatening act (FTA) is carried out in a direct, clear, unambiguous, and concise manner. The strategy occurs because the speaker deliberately does not want to maintain a good relationship with the interlocutor. In the data that were collected, the researchers found 38 instances of the bald on record impoliteness strategy. Owing to space limits, we provide only three of them for analysis. The following data take the form of the bald on record impoliteness strategy: Datum 01/IG/FIFA/31-03-2023/_pnj29: "Open your eyes". This comment, uploaded by the hater's account @_pjn29 and addressed to FIFA via the @fifa Instagram account, is a short utterance implying that the hater felt disappointed with the decisions taken by FIFA, which are considered pro-Israel: FIFA gave Israel permission to take part in the World Cup even though the Israeli army had caused chaos at a football match in Palestine, while it denied Russia permission because Russia had caused chaos in Ukraine. The language style used by the hater is informal. Through this comment, the hater states that FIFA should open its eyes so that it can be fair in making decisions. The data show that the hater attacked the interlocutor, namely FIFA, firmly, directly, and without ambiguity. According to Culpeper (1996), attacking the interlocutor firmly, directly, and without ambiguity is a form of bald on record impoliteness.
Datum 11/IG/FIFA/31-03-2023/fsssaaammm: "Double standard, when two different parties make the same mistakes, the treatment they get also different".
This comment, uploaded by the account @fsssaaammm and addressed to FIFA via the @fifa Instagram account, shows that the hater belittles FIFA's performance because FIFA is considered unfair in making decisions: FIFA gave Israel permission to take part in the World Cup even though the Israeli army had created chaos at football matches in Palestine, while it denied Russia permission because Russia had caused chaos in Ukraine. "Double standard" is a phrase with which the hater attacks the interlocutor directly and without ambiguity. The language style used by the hater is informal. According to Culpeper (1996), attacking the interlocutor directly and without ambiguity is a form of bald on record impoliteness.
Datum 32/IG/FIFA/31-03-2023/msophiann: "Banned Israel football!!!!@fifa"

The data is a comment uploaded by the @msophiann account addressed to FIFA via the @fifa Instagram account. The hater's comment shows that he/she is protesting against the decision taken by FIFA to allow Israel to take part in the world cup. The sentence "ban Israel" was written by a hater who attacked the interlocutor firmly, directly and unambiguously, asking FIFA to ban Israeli football from participating in the world cup championship. The language style used by the hater is informal. According to Culpeper (1996), if someone attacks the interlocutor directly, firmly and without ambiguity, then it is a form of bald on record impoliteness.
---
Positive Impoliteness
Culpeper (1996) stated that this strategy is designed to damage the positive face of the interlocutor. Positive face refers to each individual's desire to be respected, appreciated and needed by other people. Sub-strategies of positive impoliteness include: ignore or snub the other; disassociate from the other; be disinterested, unconcerned or unsympathetic; use inappropriate identity markers; use obscure or secretive language; and seek disagreement. We found 13 data in the form of the positive impoliteness strategy. In this section, we present only 3 data.
Datum 19/IG/FIFA/31-03-2023/z.alan21: "Swine FIFA, has no pride. Russia is banned but Israel is still roaming the sphere of football, barbaric. FIFA is not fit to be the parent of an organization, to hell with FIFA"

The data is a comment uploaded by the @z.alan21 account addressed to FIFA via the Instagram account @fifa. The data demonstrates that the hater feels disappointed with FIFA's performance. He/she thinks that FIFA is unfair in making decisions because it gives different treatment to each country. The language style used by the hater is informal. Regarding word choice, the hater uses abusive and profane language. According to Culpeper (1996), if someone attacks the interlocutor by using abusive or profane language, then this is a form of positive impoliteness with the sub-strategy of swearing, or using abusive or profane language.
Datum 31/IG/FIFA/31-03-2023/idepst: "FIFA is a pet of Israel!"

The data is a comment uploaded by the @idepst account addressed to FIFA via the @fifa Instagram account. The hater wrote that FIFA is a pet of Israel. The data implies that there is anger expressed by the hater because the decision taken by FIFA is considered to be more pro-Israel. This anger was caused by FIFA's decision to continue to give permission to Israel to take part in the world cup even though the Israeli army had caused chaos in a football match in Palestine, while FIFA had not given permission to Russia to take part in the world cup because Russia had caused chaos in Ukraine. The language style used by the hater is informal. The hater's comment attacks the positive face of the interlocutor by calling the other a name, using the identity marker "pet". According to Culpeper (1996), if someone attacks the interlocutor using another name, then this is a form of positive impoliteness with the sub-strategy of calling the other names.
Datum 24/IG/FIFA/31-03-2023/mystogan_skuy: "FIFA double standard you FIFA baggers and under wears you all like pig and dog."
The data is a comment uploaded by the @mystogan_skuy account addressed to FIFA via the @fifa Instagram account. The hater wrote that FIFA is like a pig and a dog. The data reveals a feeling of dissatisfaction expressed by the hater because the decisions taken by FIFA are considered more pro-Israel. This is due to FIFA's decision to continue to give permission to Israel to take part in the world cup even though the Israeli army has caused chaos in a football match in Palestine, while FIFA has not given permission to Russia to take part in the world cup because Russia has caused chaos in Ukraine. The language style used by the hater is informal. In the comment, the hater associates FIFA with other names, "like pig and dog". According to Culpeper (1996), if someone attacks the interlocutor using another name, then this is a form of positive impoliteness with the sub-strategy of calling the other names.
---
Negative Impoliteness
This strategy is designed to damage the negative face wants of the interlocutor, attacking the interlocutor's negative face. Sub-strategies or outputs of negative impoliteness include: frighten; condescend, scorn or ridicule; be contemptuous; do not treat the other seriously; belittle the other; invade the other's space; and explicitly associate the other with a negative aspect. In the data that were collected, the researchers found 25 data in the form of the negative impoliteness strategy (see appendix). In this section, we analyse only 3 data. The following are data found by the researchers that take the form of the negative impoliteness strategy:
Datum 15/IG/FIFA/31-03-2023/msidik.aja: "FIFA stupid, bunch of fools"

The data is a comment uploaded by the @msidik.aja account addressed to FIFA via the @fifa Instagram account. The comment shows that the hater is disappointed with FIFA. This is because FIFA applies different treatment to each country, so the hater thinks that FIFA is unfair. The language style used by the hater is informal. In the comment, the hater uses words that ridicule and insult the interlocutor. According to Culpeper (1996), if someone attacks the interlocutor by using words that ridicule and insult, then this is a form of negative impoliteness with the sub-strategy condescend, scorn or ridicule.
Datum 30/IG/FIFA/31-03-2023/muhammadazmann: "Shame on you!"
The data is a comment uploaded by the @muhammadazmann account addressed to FIFA via the @fifa Instagram account. The comment suggests that the hater is angry and disappointed with FIFA, which is considered to be more pro-Israel. The hater is disappointed because FIFA still gives permission to Israel to take part in the world cup even though the Israeli army had caused chaos in a football match in Palestine, while FIFA had not given permission to Russia to take part in the world cup because Russia had caused chaos in Ukraine. The language style used by the hater is informal. In this comment, the hater uses derogatory words. According to Culpeper (1996), if someone attacks the interlocutor by using derogatory words, then it is included in negative impoliteness with the sub-strategy condescend, scorn or ridicule.
Datum 17/IG/FIFA/31-03-2023/edomlia990: "Hey FIFA, don't involve to Israel!! Or moses will kill you and your family one by one!!!bravo Israel"

The data is a comment uploaded by the @edomlia990 account addressed to FIFA via the @fifa Instagram account. The comment shows that the hater is protesting against the decisions made by FIFA. This is because FIFA continues to give permission to Israel to take part in the world cup even though the Israeli army has caused chaos in a football match in Palestine, while FIFA has not given permission to Russia to take part in the world cup because Russia has caused chaos in Ukraine. The language style used by the hater is informal. In the comment, the hater uses frightening words to attack the interlocutor.
According to Culpeper (1996), if someone attacks the interlocutor by frightening them, then that is a form of negative impoliteness with the frighten sub-strategy.
---
Sarcasm or Mock Politeness
Sarcasm is used to express the opposite of the speaker's actual feelings towards something. The intended meaning of mock politeness contrasts with what the speaker wants to express: polite words are used to attack others implicitly. In this strategy, the form of politeness is used but its meaning is not; it counts as an impoliteness strategy because the speaker is insincere in what he or she says. The researchers found 2 data in the form of the sarcasm or mock politeness strategy. The following are the data of the sarcasm or mock politeness strategy:
Datum 02/IG/FIFA/31-03-2023/astrophiliamon: "Are you sick? @fifa"

The data is a comment uploaded by the @astrophiliamon account addressed to FIFA via the @fifa Instagram account. The comment shows that the hater is satirising the interlocutor, namely FIFA. This was a form of protest from a hater who was disappointed with FIFA's decision, which was considered unfair because it gave different treatment to each country. The language style used by the hater is informal. The comment carries another meaning: the sentence "are you sick" does not actually ask about the addressee's health condition, but is an expression of ridicule. According to Culpeper (1996), if someone attacks the interlocutor by using words whose real meaning differs from their surface form, then this is a form of the sarcasm or mock politeness strategy.
Datum 45/IG/FIFA/31-03-2023/islamicelevenofficial: "Sorry our eyes are blind when we see Israel, but our eyes are sharp when it's Russia… So let's continue regardless of the damage done by Israel"

The data is a comment uploaded by the @islamicelevenofficial account addressed to FIFA via the @fifa Instagram account. Based on the comment written by the hater, it can be understood that the hater felt disappointed with the decisions taken by FIFA. The reason is that FIFA is considered to treat each country differently: FIFA has given permission to Israel to take part in the world cup even though the Israeli army has caused chaos in a football match in Palestine, while FIFA has not given permission to Russia to take part in the world cup because Russia has caused chaos in Ukraine. The language style used by the hater is informal. The sentence "sorry, our eyes are blind when we see Israel, but our eyes are sharp when we see Russia" is an allusion to FIFA, whose eyes seem blind when they see Israel but sharp when they see Russia. According to Culpeper (1996), if someone attacks the interlocutor by using words whose real meaning differs from their surface form, then this is a form of the sarcasm or mock politeness strategy.
---
With-hold politeness
Culpeper defines with-hold politeness as occurring when the speaker does not carry out the politeness work desired by the listener, or stays silent. In this strategy, politeness that is expected in a certain situation is deliberately left out; being silent and failing to thank are realisations of it. This is considered intentional impoliteness. In this study, the researchers did not find any data taking the form of the with-hold politeness strategy.
---
The emotional expressions in impolite comments from haters on FIFA's Instagram
---
Anger
Datum 08/IG/FIFA/31-03-2023/arif_12h: "Thus is stupid federation, forza"
The data was a comment uploaded by the account @arif_12h addressed to FIFA via the Instagram account @fifa. The sentence written by the hater shows that the hater felt angry and scorned the interlocutor. This is because FIFA applies different treatment to Israel and Russia, which made the hater angry and led the hater to think that FIFA was unfair in making decisions and more pro-Israel. Based on this, the hater's emotional expression is anger with the subordinate type scorn.
Datum 15/IG/FIFA/31-03-2023/msidik.aja: "FIFA stupid, bunch of fools"

The data was a comment uploaded by the @msidik.aja account addressed to FIFA via the @fifa Instagram account. The sentence written by the hater shows that the hater is angry and is scorning the interlocutor. This is because FIFA's decision is considered to be more pro-Israel: FIFA continues to give permission to Israel to take part in the world cup even though the Israeli army has caused chaos in a football match in Palestine, while FIFA has not given permission to Russia to take part in the world cup because Russia has caused chaos in Ukraine. Based on this, the hater's emotional expression is anger with the subordinate type scorn.
Datum 22/IG/FIFA/31-03-2023/catatangame: "FIFA is slave for Israel"

The data was a comment uploaded by the @catatangame account addressed to FIFA via the @fifa Instagram account. In the comment written by the hater, the word "slave" can be interpreted as an insult. This was an expression of anger from a hater protesting against the decision that FIFA had made. The hater thinks that FIFA is afraid of Israel and does not dare to impose strict sanctions against Israel, so it continues to give permission for Israel to take part in the world cup. Based on this, the hater's emotional expression is anger with the subordinate type contempt.
---
Sadness
Datum 28/IG/FIFA/31-03-2023/antorenz: "Please, FIFA must react to the Israel army's attack on the match in Palestine, don't stay silent, you must be fair"

The data was a comment uploaded by the @antorenz account addressed to FIFA via the @fifa Instagram account. The comment written by the hater shows that the hater felt disappointed because the hater thought FIFA was unfair in making decisions. Apart from that, the hater thinks that FIFA has simply stayed silent about the incident that occurred between Israel and Palestine. Based on this, it can be concluded that the hater's emotional expression is sadness with the subordinate type disappointment.

The data was a comment uploaded by the @anangtyantoo account addressed to FIFA via the @fifa Instagram account. The comment written by the hater shows feelings of disappointment regarding the decision that FIFA has taken, which is considered to be more pro-Israel. This is because FIFA continues to give permission for Israel to take part in the world cup even though the Israeli army has made a mess of a football match in Palestine, while FIFA has not given permission for Russia to take part in the world cup because Russia has made a mess in Ukraine. Based on this, the emotional expression of the hater is sadness with the subordinate type disappointment.
Datum 52/IG/FIFA/31-03-2023/_gilangg01: "Israel attacks the Palestine national team where are you FIFA? Loser"

The data was a comment uploaded by the @_gilangg01 account addressed to FIFA via the @fifa Instagram account. The comment written by the hater shows that the hater felt disappointed with FIFA, which was considered unfair and more pro-Israel. This is because FIFA continues to give permission for Israel to take part in the world cup even though the Israeli army has made a mess of a football match in Palestine, while FIFA has not given permission for Russia to take part in the world cup because Russia has made a mess in Ukraine. Based on this, the emotional expression of the hater is sadness with the subordinate type disappointment.
---
Discussion
The researchers analysed the types of impoliteness and the emotional expressions in haters' comments on FIFA's Instagram posts. We collected and analysed 78 data. We found 38 (48%) data classified as bald on record impoliteness, 13 (17%) as positive impoliteness, 25 (32%) as negative impoliteness and 2 (3%) as sarcasm or mock politeness; we did not find any data classified as with-hold politeness. Bald on record impoliteness was thus the most dominant type of impoliteness. We also analysed the emotional expressions involved in the impoliteness based on the emotional knowledge proposed by Shaver et al. (1987). The study analysed 74 data relating to emotions: 44 expressed anger and 30 expressed sadness, while no expressions of fear were found. Anger was thus the most dominant emotional expression.
In line with Bousfield (2008), the impoliteness used by the haters contained gratuitous and conflictive verbal face-threatening acts (FTAs) which were purposefully delivered to attack FIFA. The findings of this study confirm Fatimah and Arifin (2014), in which haters used impoliteness because what they expected and desired from the organisation (FIFA) was against their values, sense of justice or fairness, expectations and wants. Nevertheless, the findings of this study also differ from those of some previous research. For example, Yusniati (2022) did not find impoliteness in the form of sarcasm or mock politeness. This could be because the data of her study were taken from films, whereas this research collected data from haters' reactions in social media comments, in which sarcasm is commonly used. Unlike the findings of Krisdayanti (2020), the present study could not establish the purposes or intentions behind the use of impoliteness. In addition, while the present study found negative emotions involved in the impoliteness, some previous research did not (e.g., Permata et al., 2019; Shinta et al., 2018). Nevertheless, in line with Wijayanto et al. (2018), our findings confirm that negative emotions can become triggers of impoliteness. The findings also agree with Spencer-Oatey (2005), who argued that negative emotions can regulate linguistic behaviour such as impoliteness. For example, many haters became very angry and disappointed with FIFA, and these emotions triggered them to use impolite language. This could be because emotions and impoliteness can go together as a communication mode (Bousfield, 2007; Wijayanto et al., 2018).
---
Conclusion
Based on the data analysis that has been carried out, it can be understood that FIFA's decision to continue to allow Israel to take part in the World Cup has drawn many objections from haters. In the comments column on the @FIFA Instagram account, many haters protested FIFA's decision. The researchers found several impoliteness strategies in the haters' comments, including bald on record impoliteness, positive impoliteness, negative impoliteness, and sarcasm or mock politeness. Of these types, the most dominant strategy was bald on record impoliteness. Regarding emotional expressions, we found two emotional expressions based on Shaver's emotional knowledge theory, of which anger was the most dominant. We hope the present research contributes to the development of research on impoliteness in cyber communication. The researchers also offer this study as a starting point for further research on impoliteness in social media with different methods and aspects.
This article has been peer-reviewed through the journal's standard double-anonymous peer review, where both the reviewers and authors are anonymised during review. | Introduction
Over 35 million children are living outside their country of birth, including 7.1 million refugee children, with many facing disruptions in their education and struggles to access quality schooling (IDAC, 2021;UNHCR, 2019). As global migration has continued, there is no doubt that interest in diversity and inclusion within school systems has grown exponentially over the last three decades. While migration is nothing new (de Haas et al., 2020), and is of ongoing concern within the sphere of education (Welply, 2021), with the rise of social media and rolling news, societies have become increasingly sensitised to world events that may previously have appeared to be at a distance (Danilova, 2014). Added to this are significant movements of people within and away from regions of the world affected by intractable conflict, environmental and food emergencies, wars, poverty and struggling economies and infrastructure (UN, 2019). As host countries of migrant people are formally bound by national and international law to provide educational opportunities to migrant children (Mendenhall et al., 2017), such movements have, inevitably, led to a wider range of children with complex backgrounds and needs entering schools. Meanwhile, as societies become more sensitised, and as schools grapple with increased intakes and the seemingly intensified challenges, so too have pre-service teacher education programmes (in some countries called 'initial teacher education') been faced with the challenge of addressing the needs of teachers as they enter such diverse classroom environments (Gay and Kirkland, 2013), with a subsequent gradual increase in the amount of diversity-related content in teacher education programmes (Silverman, 2010). 
The role of teachers in the education of migrant learners is held to be crucial (UNESCO, 2019); equally, pre-service teacher education is generally considered to be essential in equipping trainee teachers to teach diverse groups of learners, ultimately creating equal educational opportunities for all, regardless of background (Darling-Hammond, 2000). However, while there has been a 'pro-inclusive turn' (Bačáková and Closs, 2013) involving a recognition that education must be inclusive of learners' 'multiple identities', and much reform in teacher education over the last three decades, some scholars have been disappointed by how little progress has been made; as Ryan et al. (2019: 259) have argued, 'the crucial priority of preparing teachers for increasingly diverse classrooms has not been addressed'.
There is a small but growing body of work that focuses specifically on pre-service and in-service teacher education for working with migrant learners, evidenced by the articles in this review, as well as by online training courses available (for example, British Council, 2020) and by chapters in larger collections on teacher education (Cochran-Smith et al., 2008; Peters et al., 2017). However, given that the 'migrant' identity can be subsumed within wider talk of diversity, there are other sub-fields that are relevant here. These include the long-standing area of multicultural (teacher) education (Banks and Banks, 2003, 2019; Cochran-Smith, 2003; Gay and Howard, 2000; Ladson-Billings, 1999; Larkin and Sleeter, 1995; Nieto, 2000), as well as work on migrant learners' and their teachers' experiences (Adams and Kirova, 2007; Hanna, 2020, 2022; Karsli-Calamak and Kilinc, 2021; Pastoor, 2017). There is also instructive work on teacher identities (Beauchamp and Thomas, 2009; Zembylas and Chubbuck, 2018) and teacher beliefs (Ashton, 2014; Pajares, 1992), particularly concerning diversity (Devine, 2005; Pohan, 1996; Silverman, 2010), recognising, as it does, the links between identities, beliefs and practice, and especially how these identities and beliefs manifest themselves in how empathetically a (student) teacher might behave in diverse classrooms (Gay and Howard, 2000).
Despite the growing scholarship mentioned above, there remains a particular gap when it comes to pre-service teacher education for teaching new or recent migrant learners (children and young people who were born outside the country where they are now attending school) in compulsory education. (The term 'migrant' is much debated, and there are many variations of this term used differently in different country contexts and disciplines; for example, 'first-generation immigrant children', or even 'third-culture kids'. I will use the term 'migrant learner', as it includes children and young people who were born outside the country where they are now attending school. This can include refugees, asylum seekers, economic migrants and others who intend to stay in the new country, and those who do not. This can also include children and young people who move multiple times within the same country.) This gap occurs despite the fact that migration continues to be a global phenomenon, with migrant learners and their teachers facing challenges that go beyond dealing with cultural difference, xenophobia and racism (especially for non-White migrants in White-majority countries), to include the possibility of trauma from the upheaval of migration caused by war, poverty and political unrest, resulting in interrupted education and disrupted family life (Mendenhall et al., 2017). It should also be acknowledged, however, that such children do indeed hold multiple identities (Kymlicka, 1995), aspects of which increase and decrease in salience to them and their education at different points in their lives, although there is not adequate space to address these other aspects of identity in this review article (reflecting, indeed, the fact that this is not regularly highlighted in the articles included in this review). 
Therefore, this article argues that learning to teach migrant learners deserves particular attention, and aims to contribute to knowledge in this area by offering a critical qualitative review of articles published between 2002 and 2021 on pre-service teacher education that report on a variety of research and teacher education initiatives in preparing trainee teachers to work with migrant learners in compulsory education. It focuses on the following interwoven questions:
1. What are the beliefs of pre-service teachers about teaching migrant learners?
2. What role do pre-service teachers' migration identities and empathy have on their beliefs about teaching migrant learners?
3. What role can and should pre-service teacher education play in shaping these beliefs and identities?
A presentation and synthesis of the current literature relevant to the field now follow. These reveal two important and interrelated themes: trainee teachers' beliefs in the context of societal (non-)diversity; and teacher identities and empathy. After a presentation of the scope of the research review, these themes are used to analyse the articles selected. The ensuing discussion of the findings problematises the development of empathy as an aim, as well as the role of critical self-reflection in mediating the attitudes of trainee teachers. The article ends by proposing increased critical researcher-teacher collaboration in future research.
---
Multicultural teacher education
Multicultural education can be understood in many different ways (for an overview, see Cochran-Smith, 2003). However, its essence may be expressed as the view that education should create equal educational opportunities for both minorities (for example, ethnic, racial, cultural, religious, linguistic) and the majority in societies, with an emphasis on adapting the school to reflect social diversity, and to respect diverse backgrounds through pedagogies that are relevant and responsive to a diversity of cultural backgrounds (Ladson-Billings, 1995). Therefore, it is argued, pre-service teacher education should be directed towards this purpose, given its essential role in equipping trainee teachers to enable all children and young people to access learning opportunities (Darling-Hammond, 2000). While this review was motivated by the challenge that the author faced in finding scholarship on pre-service teacher education that has a very particular focus on learning to teach migrant learners, the area of multicultural teacher education appears to most easily encompass concerns about migrant learners, given its interest in respecting the diverse backgrounds of learners. Therefore, it is this scholarship that was chosen as the foundation for this review.
There has been long-term academic engagement in the field of multicultural education and multicultural teacher education (for example, Banks and Banks, 2003, 2019; Cochran-Smith, 2003; Gay and Howard, 2000; Ladson-Billings, 1999; Nieto, 2000). Key publications have included the Handbook of Research on Multicultural Education (Banks and Banks, 2003), the Routledge International Companion to Multicultural Education (Banks, 2009) and Developing Multicultural Teacher Education Curricula (Larkin and Sleeter, 1995). In addition, there are chapters on teacher education and diversity in collections focusing on teacher education more broadly. While the field has undoubtedly been dominated by scholars from the USA, there are also contributions from a range of other countries, including Australia (Inglis, 2009), the UK (Bhopal and Rhamie, 2014; Race and Lander, 2014), Thailand (Arphattananon, 2018) and South Africa (Lemmer et al., 2014). Additionally, there is a growing body of work on migrant learners and their teachers (Adams and Kirova, 2007; Arnot et al., 2016; Hanna, 2020, 2022; Karsli-Calamak and Kilinc, 2021; Maher, 2020; Pastoor, 2017; Urias, 2012).
In terms of specialist work focused on teacher education and migrant learners, there are some publications of note. In addition to the articles that will be discussed in this review, Springer's Companion to Research in Teacher Education (Peters et al., 2017) has a chapter on 'Teacher education, research and migrant children', and there are several chapters in the Handbook of Research on Teacher Education (Cochran-Smith et al., 2008) that include migration in their considerations of diversity. There are also key journals that periodically devote space to the topic, such as Teacher Educator's 2018 Special Issue on 'Immigration and teacher education' (articles from which are included in this research review) and editorials in journals such as the European Journal of Teacher Education that focus on wider inclusion issues (see, for example, Florian and Camedda, 2020;Livingston, 2019). Nevertheless, it remains rare to find consideration of multicultural teacher education and migrant education in the same review. Therefore, this review aims to bridge this gap.
Synthesising these strands of scholarship, some shared concerns emerge. First, there is often a focus on language learning and attainment, and the perception that migrant learners underperform. This can sometimes lead to a 'deficit model' that focuses on what the child is lacking rather than the knowledge that the child may hold, and neglects the non-homogeneous nature of migrant children. However, Goodwin (2017) has noted that there has also been a growing appreciation of cultural issues, particularly noting the ideas of cultural disorientation and being caught between cultures. Second, there is continued recognition that while specific training is required to work effectively with migrant learners, provision for such training has not improved; indeed, in some cases, it has been reduced, along with reduced funding and time for teacher education in general, in addition to fear of diversity and reluctance to deal with racism held by trainees, despite efforts to promote critical reflection among trainees (Gay and Howard, 2000). Finally, there is a shared frustration that research in migration and education in general, and teacher education in particular, continues to be lacking. This is despite the apparent rise in practice-based research, and many and varied attempts to engage teachers in research and encourage researchers to collaborate with practitioners (Ryan et al., 2019). This is, surely, a reminder that the challenge of preparing teachers to teach migrant learners is as significant as it ever was. Aspects of these three areas of concern will re-emerge later in the findings of this review.
---
Teacher beliefs and identities
Research on teacher beliefs and identities is also relevant here. Scholarship on teacher beliefs makes a strong case for the links between beliefs, identities and practice (Ashton, 2014;Pajares, 1992), not least when it comes to teacher beliefs about diversity, influenced by teachers' backgrounds, experiences and identities, and how these beliefs manifest themselves in how a teacher behaves in diverse classrooms (Osler and Starkey, 2010;Pohan, 1996;Rodríguez-Izquierdo et al., 2020;Silverman, 2010). Studies in this area highlight not only the risk of negative or stereotypical beliefs among student teachers detrimentally impacting on learners (Chan and Gao, 2014), but also the value of exploring the underlying, contextually based influences on teachers' identities, as well as their biographies in terms of migration (Haim and Tannenbaum, 2022), which, it has been argued, should be discussed within teacher education programmes (Cochran-Smith, 2003;Devine, 2005). Inevitably, then, the role of teacher education in developing and influencing teachers' beliefs is a focus in scholarship, identifying teacher educators as holding the potential to help or hinder student teachers to develop through their openness towards learning (Ell et al., 2017), and highlighting the importance of the teacher practicum experience, in-school mentoring and opportunities for critical reflection on teaching practice (Gay and Howard, 2000).
Closely related to teacher beliefs are teacher identities, given the strong potential for the former to be influenced by the latter. In their review of the literature on teacher identity, Beauchamp and Thomas (2009) highlight the centrality, as well as the complexity, of identity in teacher development, with Day et al. (2006) adding that it is a concept that is constantly evolving (and even fragmenting) during the pre-service stage and on into teachers' professional lives. In this sense, pre-service teacher education is seen not merely as involving one-time input, but as the beginning of a long-term process of development of teacher competencies and identities (Smagorinsky et al., 2004), influenced by people (including teacher educators, in-school mentors, wider school staff and learners), contexts (Flores and Day, 2006) and power relations (Zembylas and Chubbuck, 2018). Unsurprisingly, then, Beauchamp and Thomas (2009) argue that a greater understanding of teacher identities is essential in order to design effective teacher education programmes; shifting the focus towards the interest of this review, multicultural teacher educationalists (Cochran-Smith, 2003; Gay and Howard, 2000) would propose that understanding of teachers' cultural identities, and particularly enabling trainee teachers themselves to understand their own and others' identities, through critical reflection, is crucial to pre-service teacher education.
While the areas of teacher beliefs and identities, and the apparent influence of these two aspects on practice, are undeniably important to consider in pre-service teacher education for teaching migrant learners, what this review will reveal is that the connections are complex and sometimes problematic. This is not least the case when it comes to trainees' beliefs about, and experiences of, migration, and assumptions that might be made about the level of empathy those with migrant backgrounds do or should show towards their learners. I will return to these concepts later.
---
Scope of this review and search strategy
This research review is a critical qualitative review (see Newman and Gough, 2020) focused on published articles that present research, reviews of teaching interventions and curriculum/practice reviews on pre-service teacher education for teaching migrant learners in compulsory education. The qualitative, narrative approach was selected to allow space for analysis of themes that are insignificant numerically but significant in terms of themes of concern or interest, without generalisation as an aim. A critical lens was applied to research in terms of using a selective, purposive sample of articles, with a significant focus on thematic analysis that 'goes beyond description' (Grant and Booth, 2009: 94) towards highlighting the crucial issues in pre-service teacher education for migrant learners. The common threads of identities and beliefs emerged during the review process. The review is based on articles in English-language journals only, as this is the only language in which I am fully fluent.
The search strategy loosely followed the stages outlined by Newman and Gough (2020). I did an initial general search based on Google and Google Scholar, looking for teacher education and (im)migrant education to find the key search terms. I then turned to academic databases, beginning with Scopus. I did a Boolean search of titles, abstracts and keywords, with my search terms refined to 'teacher education' OR 'teacher preparation' OR 'teacher training' OR 'teacher instruction' AND 'migrant' OR 'immigrant' OR 'asylum seeker' OR 'refugee'. I also limited the search to research articles, and excluded books and book chapters (although the two latter sources were used in the literature review and analysis and discussion). I limited neither the time period nor the country focus. This generated a list of 142 articles. Once I had skim-read the titles of these articles, I limited the disciplines to arts and humanities and social sciences. I also excluded articles that mainly focused on learners who are second-generation immigrants, who were born in the country where the study was conducted, as this would have broadened the scope beyond what could be achieved usefully in this article. I also excluded articles that focused solely on continuing teacher education, although I included those that referred to both, or where it appeared that in-service teachers may not have previously taken part in a programme of initial teacher education. I decided to include some other articles that mentioned diversity but not migrant/immigrant/refugee/asylum-seeker learners in the title, as, once I had read them in more depth, I discovered that they nevertheless included a discussion of migrant learners (as I had defined them). I also used the references in the selected relevant articles to find articles that did not appear on Scopus. Finally, I searched Taylor and Francis, Wiley and Elsevier journal websites, as well as the digital library ERIC, to fill in any remaining gaps. 
Such inclusions and exclusions are indicative of how challenging it was to unearth the 'right' articles, given that migration issues can sometimes be somewhat 'buried' in discussions around diversity more broadly and definitions of 'migrant', 'immigrant' and 'ethnic minority', among many other terms, can differ, overlap and diverge in different ways in different countries.
The findings were synthesised and presented in narrative form, employing thematic analysis to identify and report on key themes (Grant and Booth, 2009). Two themes emerged inductively and are explored in the findings and synthesis section, supported by the wider literature in teacher education to enhance the subsequent discussion.
---
Findings and synthesis
This review found 26 relevant journal articles published in English between 2002 and 2021. The list of papers included in this review can be found in Table 1. While the majority were based in countries in the Global North where the English language dominates, there was also a small number of studies from a range of non-anglophone European countries (8 articles), as well as 1 study in Africa and 2 in Asia. The countries of focus, with the number of articles, were as follows: USA (10 articles), Sweden (2), Australia (2), Cyprus (1), Canada (2), Finland (2), Hong Kong (1), Ireland (1), Kenya (1), Northern Ireland (1), Portugal (1), Thailand (1) and Spain (1). The studies were a mixture of qualitative and quantitative research projects and reports, evaluations and supportive accounts of specialist training initiatives and practice.
Two themes emerged from the review and are considered in turn below: trainee teachers' beliefs in the context of societal (non-)diversity; and teacher identities and empathy.
---
Trainee teachers' beliefs in the context of societal (non-)diversity
The role of teachers in the education of migrant learners is generally held to be crucial (UNESCO, 2019). However, teaching and teacher education do not exist in a bubble, but are influenced by many personal, historical, political and institutional factors, as well as by dominant societal attitudes such as racism and xenophobia (Anderson, 2007). Teachers also hold their own beliefs, shaped over time through life experiences and cultural backgrounds (Osler and Starkey, 2010), and sometimes altering during their training and practicum experiences (Rodríguez-Izquierdo et al., 2020). Therefore, it was unsurprising to discover that the research articles reported a mixture of attitudes held by pre-service teachers towards migrant learners. Chan and Gao (2014) explored the views of pre-service teachers in Hong Kong towards 'newcomer' children from mainland China. In their study, student teachers expressed a range of views on teaching such children, with the majority viewing them from a 'deficit' standpoint and as presenting 'a serious professional challenge' (Chan and Gao, 2014: 140). This revealed the influence of prevalent negative societal stereotypes of mainland Chinese people, transmitted through the media. While in some contexts this might translate as racism, given that both Hong Kong Chinese and mainland Chinese share an ethnicity, in this case it could be more accurately described as xenophobia or cultural discrimination. More positive attitudes were reported by Níkleva and Ortega-Martín (2015) in the context of a study on undergraduate education students on an undergraduate teacher education programme in Spain, a country which, at the time of the research (the 2010s), had already culturally diversified in a significant way. 
Therefore, the authors regarded this as having had a positive impact on students' attitudes: 'the experience of having had immigrant classmates is viewed as culturally enriching, which vastly facilitates their encounters with multicultural students in their future profession' (Níkleva and Ortega-Martín, 2015: 315). In the context of Ireland, a country which in the 2000s was considered to be relatively new to immigration, Leavy (2005) indicated that student teachers were inexperienced with cultural diversity due to lack of exposure. Despite this lack of exposure, the study reported high levels of tolerance and support expressed towards religious, cultural, sexual and language diversity, evidenced by trainees taking on the role of 'advocate' for language diversity in the classroom. Thus, the lack of exposure to diversity in this regard did not seem to negatively influence student teachers' views. Overall, then, the studies in this review demonstrate a range of beliefs and a range of suggested explanations for such attitudes: societal diversity was less of a determinant of positive attitudes towards migrant learners than might be expected, xenophobia can at least in some cases be a factor, and anxiety over lack of training was a bigger barrier to trainees' positive beliefs about migrant learners.
The finding that pre-service teachers hold negative beliefs about migrant learners is nothing new (Devine, 2005). From a sociological and political perspective, schools and universities represent sites of socialisation of children and adults into the kind of people that society deems acceptable (Apple, 2014). If a country's diversity and its media reporting on this topic do not unfailingly determine trainee teachers' beliefs about migrant learners, this leaves space for consideration of the role that teacher education can play in influencing such beliefs. Indeed, if one assumes that teachers' beliefs about migrants impact on practice, which can then have a significant impact on how migrants feel in class (Hanna, 2020, 2022; Mendenhall et al., 2017), then it is understandable that teacher education is looked to as the vehicle by which to effect change in this area. However, how exactly to do this - what policies, pedagogies, curricula and training are needed - is not agreed upon.
---
Teacher identities and empathy
It is often suggested that teachers as a workforce do not reflect the diversity of identities held by their students, and that this is problematic for being able to empathise with students (Bhopal and Rhamie, 2014;Gay and Howard, 2000;Goodwin, 2017). In this research review, empathy, understood as stepping into the shoes of another person in order to better understand them (McAllister and Jordan Irvine, 2002), was overwhelmingly recommended as a significant resource for pre-service teachers to draw upon, and it was argued that this practice was facilitated when trainees had migrant backgrounds and experiences themselves. Ginsberg et al.'s (2018) research in the USA with student teachers and teacher educators focused on a pre-service teacher education programme which involved migrant trainee teachers working with Hispanic learners. The authors claimed that trainees' ability to understand the experiences of migrant learners made them more empathetic, and therefore effective in supporting such learners, as they were able to better use a 'pedagogy of recognition' (Ginsberg et al., 2018: 251) whereby the student teachers actively sought, through their teaching approach, to recognise and relate to the learners' backgrounds, and to offer a deeper sense of respect for them. Empathy was also deemed useful when extended to experiences of discrimination, and it was cited as a motivator for migrant teachers in their work. Significant here, however, was not only the fact that the teachers were migrants themselves, but that they shared a specific, Hispanic background. Similarly, in Naidoo's (2009: 269) analysis of an after-school homework club for refugees in Australia, they reported that trainees decided to take part in this intervention precisely due to 'personal experience of racism and its deleterious impact on learning'. 
Unsurprisingly, then, these and several other studies called for increased support and, in some cases, recruitment of migrant teachers, so that teachers could offer this level of empathy and recognition to migrant learners, such as in the case of Burmese teachers teaching the Burmese curriculum to students from Myanmar (Burma) in Migrant Learning Centres in Thailand (Tyrosvoutis et al., 2021). Indeed, taking the need for empathy further, it is, for some, the knowledge of migration that underlies this empathy that should also result in action on the programmatic level. In Vellanki and Prince's (2018) interesting 'collaborative autoethnography', based in the USA, the authors, themselves from migrant backgrounds, reflected on a global teacher education course on which one of them studied and one of them taught, and argued that such expertise should be taken into account when designing and modifying such courses. Thus, again, the importance of personal identities and experiences in informing a teacher's beliefs on migration diversity is underlined (Silverman, 2010).
However, several authors argued that where trainee teachers were not from a migrant background, they could develop empathy through pre-service teacher education programmes. In their narrative study on the experiences of three teacher educators/leaders in Canada in the context of attempting to integrate Syrian refugee learners, Gagné et al. (2017) suggested developing culturally relevant pedagogy as part of culturally relevant education (Ladson-Billings, 1995), through engaging trainees in the power of sharing stories from refugee learners and a deeper understanding of the learners' lives. Similarly, outside the university or college setting, it was argued that empathy could be achieved through a teaching practicum or in-school placement, often a compulsory part of pre-service teacher education, and seen as crucial to the broader development of teachers (Darling-Hammond, 2000;Lesko and Bloom, 1998). Wellman and Bey's (2015) article on an art education intervention that involved trainee teachers argued that this kind of initiative was essential to enabling pre-service art teachers to encounter and learn to work more effectively with refugee students. Tilley-Lubbs (2011) studied a service-learning project in an immigrant community in the USA that placed students with Spanish-speaking families. Focusing on the experiences of student teachers, they concluded that a service-learning project can be transformative for the teachers, offering 'an effective pedagogy to develop an awareness of students' worlds away from school' (Tilley-Lubbs, 2011: 104).
Critically, however, while the majority spoke with unwavering positivity about the importance and potential of developing empathy among student teachers, not all studies agree that teaching migrant learners in teacher practicum experiences, or even being a migrant themselves, would automatically lead to the development of empathy or compassion among trainees, particularly when their migration, cultural and ethnic backgrounds differed so widely. Anttila et al.'s (2018) study in Finland involved collecting the views of pre-service physical education teachers after facilitating workshops for asylum seekers, with some participants concluding that, after this experience, they had no desire to repeat it, not least because they did not see why culturally diverse content and pedagogy might be relevant to a school subject such as physical education. Racism and xenophobia may also have been a factor here, given that many of the migrant learners that the (majority White) student cohort worked with were asylum seekers from Iraq and Afghanistan. Mendenhall et al.'s (2021) study of a teacher training programme on approaches to school discipline in Kakuma refugee camp in Kenya (where many refugees come from Somalia, Uganda and Congo) showed how, even when teachers empathised with students, sharing with them their refugee background, they still sometimes used (banned) corporal punishment due to the extreme challenges of working in such a resource-constrained environment. Thus, even where empathy is encouraged and expected due to an aspect of shared identity, it may simply be out of reach for teachers under pressure. Indeed, a short teacher practicum may, at best, be simply a starting point, especially if it is not accompanied by other aspects of pre-service teacher education, such as in-school mentoring and opportunities for critical reflection on practice (Gay and Howard, 2000). Nevertheless, as Day et al. (2006) remind us, the complexity of (student) teacher identities means that they may be considered constantly evolving and under construction, throughout the pre-service period and beyond, influenced by people, contexts (Flores and Day, 2006) and power relations (Zembylas and Chubbuck, 2018). Therefore, perhaps there still remains the potential for empathy and its positive impact on practice.
---
Discussion: empathy, critical self-reflection and the role of pre-service teacher education
The findings that emerged from this research review speak to broader, critical issues within pre-service teacher education and migrant education, and the most appropriate role for teacher educators. The first theme revealed that, in terms of trainee teacher beliefs, negative views and anxieties appear to dominate the articles included in the review. These beliefs include a deficit approach that views migrant learners as lacking ability, racist or xenophobic stereotyping, and viewing working with migrant learners as much more challenging than working with non-migrants. This leads, in some of the articles, to trainee teachers being reluctant to work with migrant learners again after their first experience during the pre-service period. A number of the studies underline the risk of negative or stereotypical beliefs among trainee teachers detrimentally impacting on learners. Many articles attribute such beliefs to a lack of knowledge and training in teaching diverse learners. Some link these beliefs to the level of national/migratory diversity in a country; however, this does not seem to be a decisive factor. The second theme relates to pre-service teacher background and empathy. Here, the review reveals a strong emphasis on the value of empathy in learning to teach migrant learners. Many articles foreground the apparent advantage of being a migrant teacher oneself, both in being able to empathise and in being equipped to develop teacher education curricula, while several studies also highlight that being a migrant oneself is not essential to developing empathy with migrant learners, and that this is a skill that can be developed through the teaching practicum. Nevertheless, other articles complicate this narrative, highlighting that even holding shared experiences with migrant learners does not necessarily lead to empathy, or a desire to teach migrant learners in the future.
As mentioned earlier, while pre-service teachers hold varying beliefs and have had varying experiences concerning migrant learners, influenced by many factors both inside and outside the school environment (Devine, 2005; Haim and Tannenbaum, 2022; Rodríguez-Izquierdo et al., 2020; Silverman, 2010), it is widely accepted that teacher education can influence such beliefs and student teachers' understanding of their experiences. It was also argued that this matters because of the link between teachers' beliefs and pedagogical approaches, and how migrant learners feel in class (Hanna, 2020, 2022; Mendenhall et al., 2017). Nevertheless, as Goodwin (2002, 2017) and Allman and Slavin (2018) confirm, teacher education that directly focuses on migration issues is not consistently in place worldwide. Therefore, they call for compulsory input, mandated on the national and institutional level, on migration issues in teacher education and coursework, the teaching of culturally relevant pedagogies, and for training for teachers to become advocates for migrant learners. The implication here is that the lack of a deliberate approach can have a detrimental effect on trainees, leaving them either unprepared or disengaged from the process of learning to teach and support migrant learners, due to being disengaged from their own potentially stereotypical views on such learners, and from their own cultural identities (Cochran-Smith, 2003).
However, it was also highlighted that schools and universities represent sites of socialisation of children and adults into the kind of people society deems acceptable (Apple, 2014). This understanding is key to considerations of migrant learners and the education of their teachers, because it poses two challenges in terms of the agency of (pre-service) teachers educators and teachers: to what extent are teacher educators able to control the potentially negative impact of negative societal views on trainee teachers? And to what extent are these teachers, when they arrive in classrooms with migrant learners, able to control the potentially negative impact of these views on their learners? Just as (student) teacher identities are impacted by multiple factors (people, contexts, power relations), and are often in flux (Smagorinsky et al., 2004), so too are the identities and beliefs of those who are tasked with educating these student teachers -teacher educators are also part of this socialisation project, and may themselves be uncomfortable with encouraging critical reflection on identities or problematic societal beliefs. Thus, the challenge arises: rather than working for the socialisation of -and acceptance by -students into the status quo, can schools educate students to make changes in their society, and can teacher education and educators facilitate this? On this, Tuomi (2005: 207) is hopeful: 'The ability to adapt to societal transitions is a skill that needs to be developed in teachers ... Rather than working for socialization into the status quo, schools can foster proactive agents of social change.' The idea of school as a site for social transformation is a powerful and attractive one.
The findings also lead us to ask bigger questions about the role of empathy, troubling the notion not only of its achievability -whether that be through drawing on one's own identities and experiences of migration or discrimination, or learning it 'from scratch' through in-school teaching experiences -but also of its desirability as a goal within pre-service teacher education. As Cushner and Mahon's (2002) focus on the importance of teachers' engagement with diversity implies, particularly for those trainees from non-migrant backgrounds who cannot realistically empathise with migrants from experience, a short teacher practicum cannot be more than one part of a much longer term, perhaps even lifelong, commitment to developing as a culturally responsive educator. Critical self-reflection was a skill that often appeared to be mentioned as complementing empathy in the articles in this review (see, for example, Chan and Gao, 2014;Morita-Mullaney and Stallings, 2018), and is often cited as part of culturally relevant (teacher) education (Aronson and Laughter, 2016;Ladson-Billings, 1995). Scorgie (2010: 699) highlights that critical self-reflection involves transformation, including the 'disorienting dilemma' that requires learners to 'confront and evaluate their underlying beliefs and assumptions using both personal reflection and reflective discourse with others', leading to empathy. Such an approach may require, in Zembylas and Papamichael's (2017: 3) view, the use of 'pedagogies of discomfort' within multicultural teacher education, whereby the discomfort of student teachers when dealing with challenging topics might be harnessed in order to challenge 'dominant beliefs, habits and normative practices that sustain stereotypes and social injustice [thereby] creating openings for empathy and transformation'. This may be especially important where trainees lack intercultural or multicultural experience (Guo et al., 2009). 
In terms of migrant learners, wider scholarship has highlighted the importance of learning to reflect in a personal way on diversity, racism and internalised notions such as 'colour-blindness' (Gay and Howard, 2000). However, as Dorner et al. (2017) have highlighted, students often struggle to appreciate the complexity of identities, even their own, and some show resistance to being challenged, as was seen in the findings in this article, and elsewhere in the literature (for example, Aronson and Laughter, 2016). This can stem from many aspects of a pre-service teacher's institutional environment, and their professional and personal life and beliefs, not least the beliefs they have about themselves as a 'good person' or a 'good teacher', and it is a particular concern within antiracist and multicultural (teacher) education (Bhopal and Rhamie, 2014;Ladson-Billings, 1995).
Given these beliefs, attitudes and identities, further questions emerge about how they are shaped, and the role that teacher education can and should play in shaping trainee teachers. Ball (2009: 46) suggests that teacher educators 'must assist teachers in replacing their feelings of insecurity, discomfort, and inadequacy with feelings of agency, advocacy, and efficacy'. Gay and Kirkland (2013), among many others (for example, Cochran-Smith, 2003; Gay and Howard, 2000), also propose that critical self-reflection, in addition to cultural critical consciousness, is crucial, going beyond regurgitation of course materials towards an analysis of trainees' own beliefs and biases. Critical reflection also links to identity: as Beauchamp and Thomas (2009: 182) note, reflection is 'recognized as a key means by which teachers can become more in tune with their sense of self and with a deep understanding of how this self fits into a larger context which involves others; in other words, reflection is a factor in the shaping of identity'. Indeed, several articles in this review also recommend critical self-reflection as an essential part of teacher education, as a way of helping trainee teachers learn how to effectively engage with migrant learners (for example, Guo et al., 2009). However, the resistance towards teaching (or changing the approach to teaching) migrant learners that surfaced in some of the studies in this review presents a challenge to critical self-reflection as a 'fail-safe' strategy. While it can be facilitated by a sense of empathy with migrant learners, one cannot assume that empathy will be achieved, and one cannot even expect a teacher educator to empathise with trainees when trainees hold views that teacher educators may find abhorrent (see also Zembylas and Papamichael, 2017).
So, if a teacher educator is not modelling such empathy, then surely the potential of pre-service teacher education to develop such skills in a student teacher may be limited. Perhaps, where the value of empathy might be under question, what is more important is the way pre-service teacher education approaches empathy: it may be sufficient for teacher educators to offer, in the spirit of openness and honesty, the modelling of critical self-analysis (including of culture), harnessing the power of storytelling and applying this 'pedagogy of discomfort' to themselves. If, indeed, critical self-analysis is so essential to critical multicultural teacher education for teachers of migrant learners, then surely this could be the first step.
---
Summary and conclusion
This article has offered a critical qualitative review of 26 English-language journal articles from a diverse range of countries that focus on research and practice among pre-service teachers and their beliefs about, and experiences of, teaching migrant learners. Two themes have emerged. First, the review revealed that student teachers often held negative beliefs and anxieties about teaching migrant learners. While the diversity of the country in which the students are working did not seem to be a strong determinant of such beliefs, articles suggested that such beliefs were more often linked to a lack of knowledge and training. Second, a strong belief in the value of empathy in learning to teach migrant learners emerged, with the caveat that having a migrant background did not necessarily lead to a more empathetic student teacher. The wider discussion problematised empathy as a goal, and highlighted the challenges inherent in encouraging the development of critical self-reflection among pre-service teachers.
Several issues regarding pre-service teacher education for teaching migrant learners remain. Undoubtedly, more research that focuses on teacher education specifically for learning to teach migrant learners, as well as reflections on the future of teacher education for this task, are required. Ryan et al. (2019) have reported the rise in practice-based research, and many and varied attempts to engage teachers in research and link researchers with practitioners. Similarly, Cheng and Li's (2020) recent article calls for more effective practitioner research as part of teacher professional development, and some of the articles included in this review illustrate that teacher educators/teacher education researchers and trainee teacher partnerships, in both research and writing, may offer a step in the right direction, particularly when some of the researchers/practitioners/authors have experienced migration themselves (Dorner et al., 2017;Gagné et al., 2017;Vellanki and Prince, 2018). It would be enlightening both to see more of these collaborations and also to read a more critical reflection on the experiences of such collaborations, not least as it may shed light on the issue that was the motivation for this review in the first place: why there is so little research on teacher education and migrant learners.
Two decades into this century, another 'age of migration' looks set to continue (de Haas et al., 2020). Ferfolja (2009: 405) has argued that 'In a world increasingly globalised, knowledge of diversity and understanding the extent of differences encountered in schools is pivotal to enable new teachers to effectively address students' sociocultural and learning needs and to provide an equitable and more informed classroom environment.' If, as Ryan et al. (2019: 259) hope, 'the crucial priority of preparing teachers for increasingly diverse classrooms' is to be addressed, then resources need to be funnelled towards this end. We should not, in a few years' time, find ourselves saying, as Goodwin did in 2017, when reflecting on her article 15 years prior, that 'it is troubling to find it necessary to engage in the same examination and assessment of the same issues' (Goodwin, 2017: 434). My hope is that this resourcing will allow us in a more effective way to address the most pressing challenges of pre-service teacher education for teaching migrant learners in the twenty-first century.
---
Table 1 (excerpt), Chan and Gao (2014): Examines focus group discussions with 17 pre-service English language teachers about their perceptions of newly arrived immigrant children from mainland China. Findings reveal that (1) participants widely perceived these children from a deficit standpoint and considered them a serious professional challenge; and (2) media, life and teaching practicum experiences with immigrant children were crucial in forming these perceptions. Calls for teacher education programmes to involve pre-service teachers in critical engagement with the mass media and their own experiences so that they can address the deficit model applied by teachers to immigrant children.
---
Declarations and conflicts of interest
---
Research ethics statement
Not applicable to this article.
---
Consent for publication statement
Not applicable to this article.
---
Conflicts of interest statement
The author declares no conflicts of interest with this work. All efforts to sufficiently anonymise the author during peer review of this article have been made. The author declares no further conflicts with this article. |
The social determinants of health (SDH) are factors that can influence the distribution of acquired immunodeficiency syndrome (AIDS) rates in a given region. The objective of this study was to analyze the SDH related to AIDS. Method: Ecological study using spatial analysis techniques. A total of 7,896 disease case reports over an 11-year period were analyzed; subjects were 13 years or older and residents of the state of Ceará, in the northeast of Brazil. The unit of analysis was the municipality, calculating both the average AIDS rate and the Freeman-Tukey transformed average rate to smooth the measures. We used the simple linear regression model to assess the spatial correlation between AIDS detection rates and SDH. Geographic information systems (GIS) were used to manipulate georeferenced data. Results: High rates of AIDS were found in municipalities with better living conditions. Additionally, there was a significant relationship between primary health care coverage and lower rates of the disease in Ceará. Conclusion: Socioeconomic indicators with a statistically significant correlation with the distribution of AIDS should serve as a basis for policies in the fight against the disease. | INTRODUCTION
Over 30 years ago, a deadly disease emerged that initially affected young, otherwise healthy men, many of them homosexual. At that time, one could not imagine how much AIDS would provoke discussion of complex matters such as human rights and social issues 1 .
Although the overall increase in the distribution of antiretroviral therapy (ART) contributed to the 48% decline in AIDS-related deaths 2 in Brazil, more than 880,000 cases of the disease were detected in the country from 1980 to June 2017, with an annual average of 40,000 new cases and a gradual fall in the detection rates of the disease in recent years. However, this is not the case in the Northeast of the country, where there was a linear trend of growth in AIDS detection rates, with an increase of 35.7% between 2006 and 2016 3 .
In this context, it is plausible to affirm that socioeconomic inequalities can lead to inequalities in health. In some countries, the mortality of the general population varies according to the socioeconomic situation of the localities. With regard to AIDS, late diagnosis can occur in economically disadvantaged regions 4 , leading to an increase in opportunistic diseases and early deaths.
There are many variables related to the health/disease process, including social and economic status, education, employment, housing and physical and environmental exposure. These factors affect health and may influence the increase in morbidity rates. Studying the social determinants of health (SDH) is important, especially in countries characterized by large economic and health disparities, such as Brazil, where it is possible to introduce public policies that integrate health, social and economic actions 5 .
Stigma, discrimination and homophobia are examples of conditions that increase the chances of developing diseases 6 .
---
REV BRAS EPIDEMIOL 2019; 22: E190032
There is, therefore, a need to address SDH in order to achieve equitable health outcomes 7 . Addressing the social indicators that affect AIDS rates calls for a diversified workforce whose actions focus on broad access to quality health care, with resources for all populations 8 . Also, understanding the relationship between the health behaviors adopted by individuals and the characteristics of the places where they live is essential for understanding SDH 9 .
The relevance of this study lies in the need to identify the main SDH for AIDS in Ceará, where disease detection rates increased by 42% from 2006 to 2015 10 . Despite the great efforts of the government, current prevention measures must consider the socioeconomic reality that interferes with the health/disease process of AIDS, in order to make actions to control the epidemic in the region effective.
In view of the above, the study aimed to analyze the SDH related to AIDS.
---
METHOD
An ecological study was carried out, with the municipality as the unit of analysis. Spatial analysis techniques were applied, with geographic information systems (GIS) as data manipulation tools.
Located in the Northeastern Region of Brazil, the state of Ceará is divided into 184 municipalities and has an approximate area of 148,886.3 km 2 . The estimated population in 2015 was 8.9 million inhabitants, with a human development index (HDI) of 0.682 11 .
The study included all individuals aged 13 years old or older (age range used in the definition of AIDS cases in adults, for reporting purposes), living in Ceará and reported with AIDS in the period from 2001 to 2011, totaling 7,896 notifications.
The AIDS notification forms of the Disease Notification System (Sistema de Informação de Agravos de Notificação -SINAN) were used, whose information was provided by the Health Department of the state of Ceará (SESA).
The socioeconomic variables of Ceará were obtained from the last Demographic Census available in the country, conducted in 2010 by the Brazilian Institute of Geography and Statistics (IBGE). Data were transformed into rates and proportions, and their values aggregated by municipality. Unemployment rate, Gini index (a measure of inequality in income distribution), coverage of the Family Health Strategy (FHS) service and coverage by the Community Health Agents Program (Programa de Agentes Comunitários de Saúde - PACS) were provided by the Department of Informatics of the Unified Health System (DATASUS) and by the Department of Primary Care. Collinearity between the socioeconomic variables was evaluated by calculating the variance inflation factor (VIF).
The mean AIDS detection rate for each municipality over the period was calculated as the sum of the yearly rates divided by the number of years studied, using population data available on the IBGE website as the denominator. The Freeman-Tukey (FT) transformed rate was calculated in order to reduce the variability of detection rates with very small values and to allow the identification of spatial patterns 12 . This rate was taken as the dependent variable and used for correlation with SDH.
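As an illustration of this smoothing step, the calculation can be sketched as follows. This is a minimal sketch, not the authors' code: the function and variable names are hypothetical, and the formula shown is one common form of the Freeman-Tukey transformation applied to count-based rates.

```python
import math

def mean_rate_per_100k(cases_by_year, pop_by_year):
    """Mean annual detection rate: sum of yearly rates / number of years."""
    rates = [100_000 * c / p for c, p in zip(cases_by_year, pop_by_year)]
    return sum(rates) / len(rates)

def freeman_tukey_rate(cases, population):
    """One common form of the FT-transformed rate for count data:
    sqrt(x/n) + sqrt((x+1)/n), which stabilizes variance for small counts."""
    return math.sqrt(cases / population) + math.sqrt((cases + 1) / population)
```

Because many small municipalities report few or zero cases, the square-root form keeps their transformed rates comparable rather than dominated by sampling noise.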
The Pearson test was used to verify the statistical correlation between the dependent variable (AIDS detection rate) and covariates (socioeconomic indicators). The Shapiro-Wilk test was used to assess the normality of the dependent variable. For all tests, an alpha below 0.05 was required to reject the null hypothesis, namely that AIDS rates are independent of the region's socioeconomic indicators.
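A minimal sketch of this testing step, using simulated, hypothetical data in place of the study's variables:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical FT-transformed rates, one per municipality (184 in Ceará):
aids_rate = rng.gamma(shape=2.0, scale=5.0, size=184)
# Hypothetical socioeconomic covariate correlated with the rate:
income = 0.5 * aids_rate + rng.normal(0, 2, size=184)

r, p = stats.pearsonr(aids_rate, income)   # bivariate Pearson correlation
w, p_norm = stats.shapiro(aids_rate)       # normality of the dependent variable

alpha = 0.05
reject_independence = p < alpha            # null: rate independent of covariate
```

In the actual analysis one such test would be run per socioeconomic indicator, with the same alpha applied throughout.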
For the creation of thematic maps, a shapefile vector cartographic base was obtained from the IBGE website, containing the polygons that delimit the political divisions of Ceará by municipality. The neighborhood matrix was built using the contiguity criterion.
The Moran index was used to verify the spatial correlation between neighboring areas and the Jarque-Bera index to test the normality hypothesis of the residues.
Spatial analysis was performed using the global spatial regression method, the simple linear regression model (SLRM). The model allows us to identify whether the explanatory variables tested remain associated with the response variable, considering the influence of socioeconomic and demographic factors on their spatial distribution.
The residuals generated by the SLRM were analyzed; these should be free of spatial autocorrelation, showing no clusters. The absence of spatial autocorrelation in the model residuals reveals a random spatial pattern for the specified model, indicating a good fit.
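The global Moran index used above can be computed directly from a contiguity matrix. The following is a minimal numpy sketch (the weights matrix and values are hypothetical; production analyses would typically use dedicated spatial statistics software):

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x and spatial weights matrix w
    (w[i, j] > 0 when areas i and j are contiguous neighbours):
    I = (n / S0) * (z' W z) / (z' z), with z the mean-centred values."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    z = x - x.mean()
    n = len(x)
    s0 = w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Four areas in a row (1-2-3-4), rook contiguity:
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
# A smooth spatial gradient yields positive autocorrelation:
i_gradient = morans_i([1.0, 2.0, 3.0, 4.0], w)
```

Positive values indicate clustering of similar rates among neighbours, negative values a checkerboard pattern, and values near zero the spatially random residuals the model fit requires.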
Analyses were performed using SPSS 20.0 and ArcGIS 10.
---
RESULTS
---
CHARACTERIZATION OF THE STUDIED POPULATION
In Ceará, there was a progressive increase of the disease in the studied population, from 7.69 cases per 100,000 inhabitants in 2001 to 14.14 cases per 100,000 inhabitants in 2011. Most cases were detected in the male population (almost 67% of total notifications), among people of brown (parda) skin color (80%), and in the 30 to 39 years age range. The highest detection rates were concentrated in the capital of Ceará (Fortaleza) and its surroundings.
---
PEARSON'S CORRELATION
The bivariate analysis showed a significant association between AIDS rates and the majority of social indicators (Table 1), except for Gini index, unemployment rate and proportions of owned households, households without sanitary sewage, households with open sewage, and semi-adequate and inadequate households (p > 0.05).
The coverage by FHS and PACS (r-Pearson = -0.17 and -0.21, respectively) presented an inversely proportional relation (p = 0.0240 and 0.005, respectively), that is, municipalities with high coverage of primary care had lower disease rates.
Also, according to the bivariate analysis, most of the socioeconomic indicators that indicate the better living conditions of the studied population showed a direct and significant relationship with the values of AIDS rates, especially income averages (r-Pearson = 0.37 and p = 0.000), the proportions of households with water connected to the general network (r-Pearson = 0.17 and p = 0.0239), households with sanitary sewage (r-Pearson = 0.20 and p = 0.0056), households with more than three restrooms (r-Pearson = 0.36 and p = 0.000), adequate households (r-Pearson = 0.19 and p = 0.0086) and the proportion of female respondents (r-Pearson = 0.27 and p = 0.0002). We also identified a direct and statistically significant relationship between the response variable and the proportion of rented households (r-Pearson = 0.26 and p = 0.004).
---
SPATIAL CORRELATION
The spatial correlation of the socioeconomic indicators, obtained by the Moran index (Table 2), showed the spatial dependence of the great majority of the variables (p < 0.05), especially indicators related to income, households with three or more restrooms and households with an illiterate person in charge (Moran index > 0.3).
The bivariate analysis of the transformed AIDS rate and socioeconomic indicators, also using the Moran index, showed positive values and statistical associations for the following indicators: proportion of households at the poverty line, of individuals considered poor (with a per capita family income lower than or equal to half a minimum wage), of illiterate heads of household, of illiterate female heads of household, of households without a restroom, and of male heads of household (Table 3).
The application of the SLRM showed a statistically significant inverse relation between the transformed AIDS rate and FHS coverage (T-statistic = -14.85 and p = 0.000). Significant direct relations were verified with the average number of household members, the proportion of households with three or more restrooms, the proportion of illiterate female heads of household, and the average per capita household income (Table 4). AIDS rates were higher where these variables had larger values.
The spatial autocorrelation of the adjusted values of the AIDS rate was significant (Moran index = 0.61 and p < 0.001). The mean income indicator, however, showed a weak influence in the model, with the lowest coefficient verified in the regression (T-statistic = 0.6461). FHS coverage, in turn, was the indicator of greatest influence (T-statistic = -14.8527) (Figure 1).
The multivariate analysis showed an adjusted coefficient of determination explaining 28.23% of the variability of the AIDS rate. The spatial distribution of the residuals was random (Figure 1) and, by the Moran index, not significant (p = 0.155). A normality test was carried out to verify whether the residuals followed a normal distribution; their distribution approximated the normal curve, yielding a Jarque-Bera index of 4.8, with p = 0.09.
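The Jarque-Bera index reported above is computed from the skewness and kurtosis of the residuals. A minimal sketch of the standard form of the statistic (function name hypothetical):

```python
import numpy as np

def jarque_bera(residuals):
    """Jarque-Bera statistic: (n / 6) * (S^2 + (K - 3)^2 / 4),
    where S is the sample skewness and K the sample kurtosis.
    Under normality it follows a chi-square distribution with 2 df,
    so small values (large p) are consistent with normal residuals."""
    r = np.asarray(residuals, dtype=float)
    n = len(r)
    z = r - r.mean()
    s2 = (z**2).mean()
    skew = (z**3).mean() / s2**1.5
    kurt = (z**4).mean() / s2**2
    return n / 6.0 * (skew**2 + (kurt - 3.0)**2 / 4.0)
```

A symmetric, light-tailed residual set yields a small statistic, as with the JB = 4.8 (p = 0.09) reported for the fitted model.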
---
DISCUSSION
This study analyzed multiple socioeconomic determinants of the occurrence of AIDS. Social inequalities in the state of Ceará define inequalities in the pattern of AIDS distribution; previous work, for example, found differences in viral load among untreated individuals in the Northeast Region compared to the Center-South Region 16 .
It is considered, therefore, that the studied area is heterogeneous across municipalities and may present greater variability in the distribution of indicators and, consequently, in AIDS rates.
In addition, it is important to consider that the disease is prevalent in large cities of the state, where the family income is highest. Thus, AIDS may be predominantly more associated with the pace and risk behaviors of modern and urban life than with poverty-related factors. The 2010 Census found an improvement in the country's social indices compared to previous years. This result, however, diverges among Brazilian regions. The Northeast, which contains Ceará, has the highest illiteracy rate in the country, with 17.6%, as opposed to the South Region, with only 4.7% 17 .
Ceará itself does not have equitable socioeconomic indexes across its geographic space, since more than 75% of literate individuals live in urban areas 11 . With regard to AIDS, however, previous studies have not linked higher levels of schooling to greater knowledge about the disease, let alone to behaviors for its prevention and control 18 .
The increase in disease rates, verified not only in large urban centers, but also in small municipalities, is a factor that assists managers and health professionals in the planning of strategies for the control of the syndrome 19 .
A direct association between AIDS and rented housing was observed. One study also found a relationship between unstable housing and difficulties in accessing medical care and adhering to antiretroviral treatment among people with HIV/AIDS 20 . In addition, frequent changes of residential address may enlarge the network of sexual partners, increasing the probability of contact with infected partners.
The positive association between per capita income and AIDS rates found in this study was explained in a previous study, which identified easier and greater access to diagnostic tests and serological testing in territories with better economic conditions 18 . Regarding regional disparities, the key elements in HIV/AIDS care in low-income countries are socioeconomic and health-system deficiencies, while in rich countries clinical, psychosocial and sexual identity issues stand out 21 .
There was a significant relationship between primary health care coverage (FHS and PACS) and low rates of AIDS in Ceará. It is possible to affirm that the internalization process of AIDS was accompanied by the expansion of primary health care in different municipalities of the state. This fact, associated to public policies to combat AIDS, which defines strategies for health promotion, prevention and early diagnosis, may have contributed to the low rates of disease in regions with greater coverage of FHS and PACS teams. FHS reduces geographic barriers by acting near the household of the people under its responsibility. Spatial disparities define geographical access and the effectiveness of interventions in health institutions 22 .
The use of spatial correlation and SLRM allowed the identification of socioeconomic characteristics related to the difference in AIDS rates found in the state of Ceará. The contemporary use of spatial modeling tools has allowed the formulation of intervention strategies, integrating public health with other sectors 23 .
Despite the important results found, it is important to mention the limitations of the present study, which relate to the poor quality of some of the records obtained and to the underreporting of cases in the state. Poor record quality and underreporting of cases may conceal the actual disease situation in Ceará. However, SINAN, used in the present study, was considered the most adequate data source for the defined objectives, due to the large amount of information contained in the system. On the other hand, the integration of data provided by other systems that also handle HIV/AIDS-related information, such as the Laboratory Examination Control System (Sistema de Controle de Exames Laboratoriais - SISCEL) and the Logistic Control System of Medicines (Sistema de Controle Logístico de Medicamentos - SICLOM), could have helped to reduce underreporting of disease cases.
Also, because it is an ecological study, it is not possible to make individual inferences regarding the results. Such restrictions, however, did not compromise the main findings and the relevance of the research, since the objective was to identify the socioeconomic indicators that interfere in the detection rates of AIDS in the population of Ceará.
---
CONCLUSION
AIDS rates were higher where there were better living conditions. It was observed that sites with greater coverage of FHS and PACS have lower rates of AIDS detection. It can be concluded that social disparities can lead to different vulnerabilities of the disease in the same geographic territory.
This research may contribute to the understanding of the relationship between SDH and AIDS. This way, the political and assistance actions to control the epidemic can be directed to the most relevant SDH. Identifying social elements that affect the health/disease process of AIDS allows directing the planning of actions, both at the macro level, in the establishment of public health policies and programs of care, and at the level of less complexity, in the care context of individuals affected by the infection.
The techniques of autocorrelation and spatial analysis adopted, using GIS resources, were very useful to verify the epidemiological patterns of the distribution of a certain disease and its relation with other factors characteristic of the geographic space. This technology can be replicated in AIDS studies in other locations, as well as being useful for revealing epidemiological patterns of other diseases.
Due to the social inequalities between municipalities, it is recommended that future research consider smaller units of analysis and comparative investigations between different territories, in order to better understand the dynamics of SDH in different locations. It is also recommended to apply this study design to AIDS rates stratified by gender and age group, to evaluate whether the impact of SDH differs across subgroups.
---
Differentiated socioeconomic indicators among municipalities are reflected in localities with no or few reported cases, in contrast with other sites with high disease rates.
The present investigation evidenced high rates of AIDS in places with better living conditions, corroborating an earlier study, which also identified higher disease rates among residents of wealthier households 13 . Previous research, however, has suggested the influence of national per capita gross domestic product (GDP) and the Gini index on reducing the incidence rate of HIV/AIDS 14 . Another Brazilian study observed the current challenge of the spread of the epidemic among poorer people living in certain regions of the country 15 . This points to the influence of the heterogeneity of Brazilian regions and states on the epidemiological behavior of diseases shaped by their SDH. A mapping study of circulating HIV/AIDS in the country also revealed the heterogeneity of the infection among Brazilians, with areas of concentration of the community viral load.
This study investigated associations between components of physical activity (PA; e.g. domain and social context) and sedentary behaviors (SBs) and risk of depression in women from disadvantaged neighborhoods. A total of 3645 women, aged 18-45 years, from disadvantaged neighborhoods, self-reported their PA, SB and depressive symptoms. Crude and adjusted odds ratios and 95% confidence intervals were calculated for each component of PA, SB and risk of depression using logistic regression analyses, adjusting for clustering by women's neighborhood of residence. Being in a higher tertile of leisure-time PA and transport-related PA was associated with lower risk of depression. No associations were apparent for domestic or work-related PA. Women who undertook a small proportion of their leisure-time PA with someone were less likely to be at risk of depression than those who undertook all leisure-time PA on their own. Women reporting greater time sitting at the computer, screen time and overall sitting time had higher odds of risk of depression compared with those reporting low levels. The domain and social context of PA may be important components in reducing the risk of depression. Reducing time spent in SB may be a key strategy in the promotion of better mental health in women from disadvantaged neighborhoods. | Introduction
Participation in regular physical activity (PA) [1] as well as reducing sedentary behaviors (SBs) such as television (TV) viewing [2] has a strong cardioprotective role. However, recent research has indicated that these behaviors may also play an important role in the treatment and prevention of depression [3]. Depression is the world's most incapacitating illness [4], with nearly 20% of women from developed countries suffering from depression within their lifetime [5]. Several population groups have been found to be at a greater risk of depression, including women [6] and adults of low socioeconomic position (SEP) [7]. These population groups are also at increased risk of physical inactivity [8,9], highlighting the importance of research that focuses on those target groups in order to improve mental health through the promotion of healthy lifestyles (i.e. increasing PA and reducing SB).
Much research has indicated the beneficial effect of PA on the risk of depression [10]. However, little is known about the specific characteristics of PA that are most beneficial to mental health, for example the domain and social context in which PA occurs. Although various observational [11] and intervention [12] studies have found leisure-time PA to be inversely associated with depression among women, few studies have assessed the association with PA undertaken in other domains (e.g. work related, domestic and transport related). Until now, only three observational studies have specifically compared the association between PA in various domains and risk of depression in women [11,13,14]. All three studies concluded that leisure-time PA was inversely associated with risk of depression. One of the three also found an association in the opposite direction between domestic (household) PA and risk of depression [13], and another demonstrated a positive association between work-related PA and risk of depression [14]. No associations were evident between transport-related PA and risk of depression in women [11]. However, that study did not distinguish between different types of transport-related PA (e.g. walking or cycling), which may be an important factor [11].
Similarly, the association between PA undertaken in different social contexts and risk of depression has received very little research attention. Only one observational study has considered the social context of PA and its association with risk of depression in women [11]. That study found that being active with a family member was associated with lower odds of depression, compared with never being active with a family member. Conversely, two intervention studies have compared the effects of differing social contexts of PA on depressive symptoms in women [15,16]. Both interventions compared individual (home-based) PA programs with group-based (accompanied) activity programs and found significant effects of both formats in reducing participant's depressive symptoms, with no clear benefit of either format over the other. However, one of those studies included a small sample size as well as a short follow-up period which limited results [16].
Recently, research attention has focused on the association between SB (e.g. TV viewing and computer use) and depression, but this association remains poorly understood. Most observational studies have found positive associations between time spent in SB (e.g. TV and computer use) and risk of depression [3,[17][18][19][20][21]. In contrast, two intervention studies assessing computer or Internet use and risk of depression found inverse associations between computer use and depression [22,23], suggesting that time spent on the computer may reduce the risk of depression. Only one study has assessed whether the relationship between SB and depression may be moderated by PA [3]. That longitudinal study found lower levels of SB to be associated with reduced risk of depression when PA levels were low, yet SB was not a critical factor when PA levels were high [3].
The purpose of the current study was to examine the associations between components of PA (e.g. domains and social context) and risk of depression as well as the association between SBs (e.g. TV viewing and computer use) and risk of depression using data from a large population-based sample of women living in socioeconomically disadvantaged areas. Furthermore, the study aimed to test for the presence of an interaction between PA, SB and risk of depression. It was hypothesized that leisure-time PA would be more strongly associated than other domains of PA with lower risk of depression and that activities undertaken in a social context (i.e. PA with somebody) would be more strongly associated with lower risk of depression, compared with PAs undertaken alone. It was also hypothesized that SBs such as TV viewing and sitting at the computer would be associated with higher risk of depression. Finally, it was hypothesized that the positive association between SB and risk of depression would be stronger among women doing none/low levels of PA than those who were highly active.
---
Methods
Analyses were based on cross-sectional survey data collected in 2007-08 from the Resilience for Eating and Activity Despite Inequality (READI) study. Data used in the present analyses were provided by 3645 women living in socioeconomically disadvantaged areas of Victoria, Australia, aged between 18 and 45 years. Methods have been described in detail elsewhere [24] and are summarized below.
---
Participants
Participants were randomly recruited from 80 Victorian neighborhoods (suburbs; 40 rural and 40 urban) of low SEP, based on the Australian Bureau of Statistics Socioeconomic Index for Areas [25]. The electoral roll was then used to randomly select approximately 150 women, aged between 18 and 45 years, from each of the 80 suburbs.
Surveys were sent to a sample of 11 940 women, and a total of 4934 women returned a completed survey, representing a response rate of 45% [24]. Of the respondents, 571 women were excluded due to residing in 'non-READI' neighborhoods. A further nine women were excluded due to falling outside the valid age range (i.e. either younger than 18 years or older than 46 years, or had data missing on this variable). Three women were excluded as the survey was not completed by the woman it was addressed to and two women later withdrew from the study. This left a total of 4349 women included in the overall study. Since pregnancy is likely to affect both PA levels [26,27] and risk of depression [28], 284 women (6%) were excluded from analyses because they reported being pregnant, did not know their pregnancy status or did not complete this question. A further 420 women (10%) were excluded due to having missing data on one or more covariates. This left a total of 3645 women (74% of the original respondents) with data for inclusion in the analyses.
---
Procedures
The study was approved by the Deakin University Human Research Ethics Committee. Women were sent a pre-survey letter in the mail, informing them that they had been selected to take part in a study on women's health and that the survey would be sent to them shortly. Surveys were posted 1 week later. Following the Dillman protocol [29], non-respondents received a mailed reminder 2 weeks later and a second reminder with a replacement survey a further 2 weeks later. Women received small incentives (e.g. tea bags and $1 scratch lottery tickets) with their initial survey package. Written consent to participate was obtained from all respondents.
---
Measures
---
Domain of PA
Self-reported PA was measured using the longform self-administered version of the International Physical Activity Questionnaire (IPAQ-L), a validated and reliable measure involving a 7-day recall of PA behaviors [30]. Questions included the frequency and duration of time spent undertaking various intensities (walking, moderate and vigorous) of PA in leisure time, transport-related activity, work-related activity and domestic PA. For each of these four domains, participants were required to estimate the number of days, hours and minutes they spent undertaking such activities in the past week.
The total duration of PA was calculated for each variable by multiplying the frequency of activities by the duration within each domain. Further, leisure-time and work-related PA variables were summed across intensities (walking, moderate and vigorous) and transport-related PA was summed across activities (walking and cycling) to give a total duration of PA within each domain. Total (global) weekly duration of PA across all domains was also calculated. Because of the skewed nature of the distributions and the large proportions of women reporting no PA on several variables, each continuous PA variable was transformed into a categorical variable with three levels based on the tertiles within the respective distributions.
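The scoring described above (frequency × session duration, summed across intensities, then split into tertiles) can be sketched as follows. This is a minimal illustration, not the study's code; the column names and values are hypothetical, and only one domain with two intensities is shown.

```python
import pandas as pd

# Hypothetical IPAQ-style items: days/week and minutes/session for one domain.
df = pd.DataFrame({
    "leisure_walk_days": [0, 3, 5, 7, 2, 0],
    "leisure_walk_mins": [0, 30, 20, 60, 45, 0],
    "leisure_vig_days":  [0, 1, 3, 4, 0, 0],
    "leisure_vig_mins":  [0, 60, 40, 30, 0, 0],
})

# Weekly duration = frequency x session duration, summed across intensities.
df["leisure_total_mins"] = (
    df["leisure_walk_days"] * df["leisure_walk_mins"]
    + df["leisure_vig_days"] * df["leisure_vig_mins"]
)

# Tertiles of the distribution; duplicates="drop" guards against ties
# (e.g. many zeros), at the cost of collapsing categories when edges coincide.
df["leisure_tertile"] = pd.qcut(
    df["leisure_total_mins"], q=3, labels=False, duplicates="drop"
)
```

Categorizing by tertile, rather than modeling minutes directly, is one way to handle the skewed durations and the large share of women reporting no activity.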
---
Social context of PA
The social context of leisure-time PA was assessed through the following question, developed for this study: 'Thinking about all of your walking, moderate and vigorous leisure-time PA in the last 7 days, about how much of this was done ON YOUR OWN (as opposed to with someone else like family, friend or in an exercise group or class)?'. Response categories included: all, most (about three-fourths), about half, a little (about one-fourth) and none. The reliability of this measure was tested and found to be adequate (Kappa value = 0.625) [31].
---
Sedentary behavior
Three measures of SBs were included in the survey: time spent sitting at a computer, time spent sitting watching TV and overall time spent sitting. Time spent sitting watching TV and time spent sitting
using the computer were examined separately. Participants were asked to estimate the number of hours and minutes they spent undertaking those activities on a usual weekday, as well as a weekend day. Overall sitting in the past week was assessed using the IPAQ-L. Participants were asked to estimate the number of hours and minutes spent sitting on a usual weekday, as well as a weekend day. These measures have been found to be reliable and valid in an Australian adult population [32].
Computer time, TV viewing time and sitting time were each summed to give a total weekly duration of time spent usually undertaking each of those SBs. This was done by multiplying the duration of each SB performed on weekdays by 5 (days) then adding this to the weekend days total duration [duration multiplied by 2 (days)]. The variable 'weekly screen time' (TV viewing + computer use) was created by summing the weekly duration for the variables 'TV viewing' and 'computer use'. Each continuous SB variable was then transformed into a categorical variable based on the tertiles of the distribution.
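The weekly summation just described (usual weekday duration multiplied by 5, plus usual weekend-day duration multiplied by 2, with screen time as TV viewing plus computer use) can be sketched as follows; this is an illustrative Python sketch with hypothetical durations, not the authors' SPSS syntax.

```python
def weekly_hours(weekday_hours, weekend_day_hours):
    """Usual weekday duration x 5 days plus usual weekend-day duration x 2."""
    return weekday_hours * 5 + weekend_day_hours * 2

tv_hours = weekly_hours(2.0, 3.0)        # 16.0 h/week of TV viewing
computer_hours = weekly_hours(1.0, 0.5)  # 6.0 h/week of computer use
screen_time = tv_hours + computer_hours  # 'weekly screen time' = TV + computer
```

As with the PA variables, each resulting continuous SB variable was then collapsed into tertile-based categories before modelling.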
---
Depressive symptoms
Depressive symptoms were assessed using the 10-item version of the Centre for Epidemiologic Studies Depression Scale (CES-D), a well-validated measure of depression [33,34] that has been used in previous studies examining the association between PA and depression [35]. It includes questions that relate to various symptoms of depression that may have been experienced in the past week, which indicate whether a woman is at risk of depression. Respondents rated themselves on a four-point severity scale. CES-D scores of 10 or greater indicated that the participant was at risk of depression [34,36,37].
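The scoring rule above (ten items on a four-point severity scale, with a cut-off of 10 or greater flagging risk of depression) can be sketched as follows. Note one assumption: the CES-D-10 reverse-scores its two positively worded items, and the item positions used here are illustrative rather than taken from the instrument; consult the scale documentation for the exact layout.

```python
def cesd10_score(responses, reverse_items=(4, 7)):
    """responses: ten ratings on the 0-3 severity scale. The two positively
    worded items are reverse-scored; the positions (indices 4 and 7) are
    an illustrative assumption, not taken from the instrument."""
    return sum(3 - r if i in reverse_items else r
               for i, r in enumerate(responses))

def at_risk_of_depression(responses, cutoff=10):
    """The study classified CES-D scores of 10 or greater as at risk."""
    return cesd10_score(responses) >= cutoff
```

The binary at-risk indicator produced this way is the outcome variable used in all the logistic regression models reported below.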
---
Covariates
Self-reported age, body mass index [BMI; not overweight (<25), overweight (25-29.9) and obese (>30)], marital status, education, employment status, household income, children living at home, country of birth and physical health were included in the analyses as potentially confounding factors (see Table I), as these variables were bivariately associated with the risk of depression in chi-square analyses.
---
Statistical analyses
Demographic characteristics, PA, SB and risk of depression were initially examined using descriptive univariate analyses performed using SPSS version 14.0 statistical software. Bivariable associations between domains of PA, social context of PA, SB and risk of depression were examined using chi-square analyses. Crude and adjusted (controlling for confounding factors described earlier) odds ratios (ORs) and 95% confidence intervals (CIs) were then calculated for each of the PA and SB variables and risk of depression using logistic regression analyses. Further, logistic regression analyses were used to test for an interaction between SB (i.e. weekly sitting time), PA (i.e. total weekly leisure-time PA) and risk of depression. Logistic regression analyses controlled for clustering by neighborhood of residence using STATA version 10.1 statistical software package.
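The ORs and 95% CIs reported in Table III come from exponentiating the fitted logit coefficients and their Wald limits. A minimal sketch of that conversion is below; the coefficient and standard error are hypothetical, and in the study the models themselves were fitted in STATA with clustering by neighborhood of residence.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its Wald limits
    to obtain an odds ratio with a 95% CI: returns (OR, lower, upper)."""
    return (math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se))

# Hypothetical coefficient for a PA tertile vs. the lowest (reference) tertile;
# an OR below 1 indicates lower odds of being at risk of depression.
or_, lo, hi = odds_ratio_ci(-0.35, 0.12)
```

The same conversion applies whether the standard error is conventional or cluster-robust; only the `se` fed in changes.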
---
Results
Table I presents the sociodemographic characteristics and risk of depression among participants. The mean age of participants was 35 years. Just over half of the women (53%) were classed as not overweight. The majority of participants were born in Australia (89%) and were married/de facto (66%). A total of 1874 (51%) reported their highest qualification as completing high school or an apprenticeship or certificate/diploma. Just under half reported a weekly household income of $1500 or less and the majority of women had children living at home (62%). A total of 1328 (36%) participants were classified as being at risk of depression (according to the CES-D).
Table II shows the proportion of women at risk of depression according to PA and SB variables from chi-square analyses. Leisure-time walking, moderate and vigorous PA and total leisure-time PA were each inversely associated with risk of depression. Although women in the middle tertile of moderate work-related PA (reporting 0.1-6 hours per week) were less likely to be at risk of depression than those reporting higher or lower durations, no association was found between work-related walking, vigorous or total work-related PA and risk of depression. No other domains of PA (transport-related or domestic) were related to risk of depression.
The proportion of participants at risk of depression was higher among women who reported doing all leisure-time PA on their own, when compared with those who reported doing some proportion of their leisure-time PA with someone. Of the SB variables, risk of depression was positively associated with TV viewing time, screen time and overall sitting time but not associated with time spent sitting at the computer.
Table III shows the crude and adjusted (for confounders) ORs and 95% CIs from logistic regression models predicting the odds of risk of depression according to PA and SB variables.
---
Physical activity
Both the unadjusted and adjusted results showed that compared with those in the lowest tertile of total leisure-time PA per week (reporting less than 40 min), those in the middle and highest tertiles (greater than 40 min) had lower odds of risk of depression. When examined according to specific intensities, both the unadjusted and adjusted results indicated that compared with those who reported no walking, those who reported some walking in leisure time had lower odds of risk of depression. Results from both unadjusted and adjusted models showed that compared with women who reported no moderate-intensity leisure-time PA, women in the middle tertile (reporting between 0.1 and 1.33 hours) of moderate-intensity leisure-time PA per week had lower odds of risk of depression. Both the unadjusted and adjusted results showed that compared with those who reported no vigorous leisure-time PA per week, those in the highest tertile (reporting greater than 1.9 hours) had lower odds of risk of depression.
While moderate-intensity work-related PA was associated with risk of depression in the unadjusted model, this was no longer significant in the adjusted model. Both the unadjusted and adjusted results indicated that compared with those in the lowest tertile (reporting less than 30 min per week) of transport-related PA, those in the highest tertile (reporting greater than 2.5 hours per week) had lower odds of risk of depression. When examined according to specific activities, the adjusted results show that compared with those who reported less than 30 min per week of walking for transport, those who reported greater than 2.5 hours per week had lower odds of risk of depression. No associations were evident between domestic PA and odds of risk of depression in either unadjusted or adjusted models.
---
Social context of PA
Before and after adjusting for covariates, results showed that compared with those women who reported doing all leisure-time PA on their own, those who reported doing about three-fourths (most) of their leisure-time PA alone (i.e. about a quarter with others) had lower odds of risk of depression. However, this was the only category of social context to reach statistical significance.
---
Sedentary behavior
Both the unadjusted and adjusted results showed that compared with women in the lowest tertile (reporting less than 4.75 hours) of computer time per week, those in the highest tertile (reporting greater than 21.5 hours) had higher odds of risk of depression. TV viewing was not significantly associated with risk of depression in the adjusted model. Associations between women in the middle tertile (23.5-46.3 hours) of total screen time and risk of depression were not significant in the adjusted model. However, both the unadjusted and adjusted results showed that compared with women who reported less than 23.5 hours of total screen time per week, those who reported more than 46.3 hours per week had higher odds of risk of depression.
Unadjusted and adjusted results indicated that compared with women in the lowest tertile (reporting less than 30.7 hours) of sitting time per week, those in the highest tertile (reporting more than 54.5 hours) had higher odds of risk of depression.
Although there were significant main effects between mid and high amounts of leisure-time PA and risk of depression, there were no interactions between leisure-time PA, SB (sitting time) and risk of depression in either the unadjusted or the adjusted models.
---
Discussion
The current study provides novel findings regarding the domain and social context of PA, as well as the SBs, associated with risk of depression in women from socioeconomically disadvantaged neighborhoods.
Results showed that women who reported participating in greater amounts of leisure-time PA (greater than 40 min per week) were less likely to be at risk of depression than those who reported undertaking less than this. Further, results indicated an inverse relationship with risk of depression when examining the duration of leisure-time PA undertaken in each intensity (i.e. walking, moderate and vigorous). These findings suggest that greater doses of leisure-time PA may reduce the risk of depression, or alternatively people experiencing depressive symptoms spend less time in leisure-time PA, consistent with findings from previous studies [11].
The present study also found that undertaking a high dose of transport-related PA (e.g. greater than 2.5 hours) was associated with lower risk of depressive symptoms compared with those who reported lower doses. However, when examined according to each transport-related activity separately (cycling and walking), only high doses of transport-related walking were associated with a reduced likelihood of depression, suggesting that it may be the type of PA used for transport that is important. This finding is in contrast to previous studies that specifically examined and found no association between transport-related PA and risk of depression [11,14]. However, the sample sizes of both previous studies were much smaller than that of the current study, perhaps reducing the power to detect smaller associations.
Consistent with previous studies [11], no association was found between any intensity of work-related or domestic PA and risk of depression in this study. This finding suggests that it may be the type/mode of PA, rather than the dose (i.e. intensity and duration), that is most important in determining the relationship with risk of depressive symptoms. These findings may be due to women's lack of enjoyment or control when participating in work-related and domestic PA.
A number of physiological hypotheses have been suggested to explain the inverse association between PA and depression, including the 'endorphin hypothesis', which suggests that PA produces endorphin secretion, which in turn reduces pain and produces feelings of euphoria [38]. However, the production of endorphins requires a high exercise intensity [39]. Since walking for leisure was inversely associated with risk of depression in the current study, other hypotheses such as the serotonin hypothesis [40] may be more applicable. The serotonin hypothesis suggests that exercise may reduce depression by increasing the synthesis of serotonin [41,42], a neurotransmitter found in the brain that regulates mood and stress [43]. Furthermore, spending time outdoors (in natural light) may provide additional mental health benefits when undertaking PA, as exposure to light has been found to increase serotonin synthesis [44]. Non-physiological hypotheses may also play a role in explaining the inverse association between PA and depression. These relate to distraction effects, by which improvements in mental well-being following exercise may be due to the diversion of negative thoughts during the activity [45]. Alternatively, improvements following PA may be due to the sense of mastery and success derived from achieving goals [45].
The social context of PA was found to be associated with risk of depression in the current study, although the association was not linear and only held for those women reporting undertaking one-quarter of their leisure-time PA with someone else. The finding that reduced risk of depression was associated with doing about one-quarter of PA with someone else, but not 50% or more, is not easily explained. This may have been related to the particular categories of PA analyzed. However, in further investigation of this association, we recategorized social context as: all leisure-time PA done on own (reference category), more than half (but not all) leisure-time PA done on own and less than half leisure-time PA done on own. Yet, results showed no significant associations between either one of those categories and risk of depression (data not shown). Further investigation of this non-linear association is required.
The current study suggested that additional mental health benefits may come from undertaking some leisure-time PA with someone else, yet not all PA with others may be associated with a lower risk of depression. This is consistent with findings from the only other cross-sectional study that has examined the association between the social context of PA and risk of depression [11]. That study found that being active with a family member was associated with a lower risk of depression, yet being active with a friend was not. Conversely, our findings may also suggest that perhaps women at risk of depression prefer to participate in PA by themselves as social withdrawal is a symptom of minor depression [46]. Since social support is widely known to be linked to lower levels of depression [47], the social context of PA may be an important component in the relationship between PA and depression.
The current study found that undertaking greater doses of computer use, screen and overall sitting time were associated with an increased risk of depressive symptoms. This is consistent with several studies that assessed SB in terms of computer/Internet use [48] and overall sitting time [21], suggesting that greater doses of SB increase the risk of depression or alternatively people experiencing depressive symptoms spend greater amounts of time in SBs.
The findings of this study indicated no interaction between SB, PA and risk of depression. In other words, contrary to expectations, and to the findings of one existing study [3], positive associations between SB and risk of depression were not altered by participants' leisure-time PA levels.
In the previous study [3], depression was not reported exclusively as the outcome measure [e.g. the outcome measure (mental disorder) also included stress and anxiety] and a longitudinal design was used, which may account for the differences in results. However, similar to our findings, studies investigating physical health and disease parameters have found the relationship between SB and physical health conditions such as obesity, metabolic syndrome and type 2 diabetes to be independent of PA [49][50][51]. Therefore, assessing the joint SB-PA-depression relationship may be an important point of consideration and area for further research.
One major limitation of the current study is its cross-sectional design, which does not allow for causality or the direction of relationships to be determined. A second limitation is that self-report measures were used to assess PA, SB and risk of depression; however, all measures were well validated [30,33]. Future studies could utilize objective measures such as accelerometers for assessing PA and SB. Finally, women with missing data on any covariates (e.g. education, income and BMI) were excluded from regression analyses in the present study. Chi-square analyses showed that a significantly greater proportion of women excluded for this reason were at risk of depression when compared with those who were included. Therefore, a disproportionately higher number of women at risk of depression may have been missed in analyses.
A major strength of this study is the large, population-based sample of women living in socioeconomically disadvantaged neighborhoods, which provided good power to detect associations, even after controlling for a range of important covariates. Few studies have examined the association between domain and social context of PA and risk of depression or between SB and risk of depression, particularly in women [10]. Furthermore, only one previous study has assessed the interaction between SB, PA and risk of depression [3]. This study extends this evidence to socioeconomically disadvantaged women who are a population group at a great risk of physical inactivity [8,9] and depression [7].
---
Conclusion
Given that depression is the world's most incapacitating illness [52], strategies to prevent and manage depression are increasingly important.
Recognizing the cross-sectional nature of the current study, these findings suggest that promoting PA, particularly for leisure and transport, could be an important aspect in preventing depression. Furthermore, mental health guidelines may be developed to include some aspect of social/accompanied leisure-time PA for additional mental health benefits. Guidelines should also recommend reducing time spent in SBs (e.g. sitting time and computer use) in order to further reduce risk of depression. However, confirmation of these findings using prospective and intervention study designs is required.
---
Conflict of interest statement
None declared. |
Background: Breast/Chestfeeding remains a public health issue for African Americans, and increased rates would mitigate many health disparities, thus promoting health equity. Research Aims: To explore the interplay of generational familial roles and meaning (or value) ascribed to communicating infant feeding information across three generations. Method: This prospective, cross-sectional qualitative study used an asset-driven approach and was guided by Black Feminist Thought and Symbolic Interactionism. African American women (N = 35; 15 family triads/dyads) residing in the southeastern United States were interviewed. Data were analyzed using thematic analysis. Results: The older two generations described their role using assertive yet nurturing terms, while the younger generation carefully discussed the flexibility between their familial roles. Emergent themes described the meaning each generation attributed to communicating infant feeding information: "My Responsibility," "Comforting," "Bonding Experience," "She Cared," and "Gained Wisdom." Conclusions: Our findings have potential to contribute to achieving health equity in African American families. Future breast/chestfeeding promotion efforts may benefit from reframing the current approach to including protection language and not solely support language. Lactation professionals should further recognize and support strengths and resource-richness of intergenerational infant feeding communication within African American families using strength-based, empowerment-oriented, and ethnically sensitive approaches.
public health issue for African Americans, and increasing their rates is essential to eliminating these health disparities, and thus promoting health equity (Anstey et al., 2017).
Health communication strategies are used in clinical settings to address health disparities (Hovick et al., 2015). However, because of the history of discrimination and medical mistrust, African Americans tend to rely on their extended kinship networks for health-related information (Pullen et al., 2015). Often, health communication occurs in African American families through oral histories, storytelling, and narratives rooted in African American culture (Fabius, 2016). These informal methods of passing information from generation to generation have many purposes, including an emancipatory function to counter dominant ideologies, and to teach younger generations about the resilience and perseverance unique to the African American experience (Fabius, 2016). This experience dates back to the historical narratives from chattel slavery that include anti-Black racism, involuntary breeding, sexual assault, wet-nursing, child abduction, forced sterilization, and maternal vilification, and the effects of this legacy are still ongoing (Fabius, 2016; Gatison, 2017; Roberts, 1997; West & Knight, 2017). "Stories and rituals are symbolic links to the past, performed in the present. Thus, they may be regarded as a means for understanding family communication as an oral tradition" (Jorgenson & Bochner, 2004, p. 518). Family infant feeding communication, and the quality and availability of social support, influence breastfeeding outcomes for African Americans (DeVane-Johnson et al., 2017).
In African American families, elders are entrusted keepers of communal knowledge, and hailed as the wisest, most respected members of the family (McLoyd et al., 2019). They play an important role in preserving their cultural beliefs and family values (including family reciprocity, sense of duty, and group survival). Other aspects of their role are passing down communication values and ideals, which are foundational for intergenerational support in their flexible family system (McLoyd et al., 2019). Therefore, African American mothers tend to consult their own mother and maternal grandmother for parenting guidance and advice rather than healthcare providers (Grassley et al., 2012). Grandmothers (an infant's grandmother and great-grandmother) play a critical role in infant feeding decisions and may act as a postpartum breastfeeding advocate (Grassley & Eschiti, 2011). Furthermore, researchers have postulated that since many grandmothers in the United States may lack breastfeeding knowledge and experience, this may influence the support and advice they provide to new birthing parents (Grassley et al., 2012). In fact, a grandmother's lack of understanding (or misunderstanding) of current breastfeeding recommendations may influence a parent's breastfeeding self-efficacy, supply, and overall success (Grassley et al., 2012). However, strength-based literature has suggested that although family feeding history may influence the expression of social support, positive social support exists within African American families (Woods Barr et al., in press).
Given that African Americans experience breastfeeding disparities and health disparities, understanding the key sociocultural contexts in which infant feeding is communicated is important. An understanding of the meaning African Americans give to shared infant feeding information within their family is necessary (Peritore, 2016). African Americans are often bombarded with messages, images, and stereotypes of "good" motherhood from multiple channels; however, these messages may conflict with the complex relationships African Americans have with their bodies, families, and communities, as a result of their historically negative reproductive experiences in America (Johnson et al., 2015). The authors posit that the socially constructed meanings of infant feeding information, passed down from generation to generation by a mother's own mother and/or maternal grandmother, help to shape feeding practices among the younger generation. To better understand the meaning of sharing infant feeding information, it is important to consider how each generation defines and navigates their familial roles. These are missing components in the comprehensive study of family health and infant feeding communication. Therefore, the aim of this study was to explore the interplay of generational familial roles and meaning (value) ascribed to communicating infant feeding information across three generations of African American women.
---
Theoretical Frameworks
Black Feminist Thought (BFT) and Symbolic Interactionism informed and guided this research. BFT provides a lens to view the intersectional experiences of Black women and the ways in which they interact with society (Collins, 2000). As a theoretical framework, BFT strives to change the narrative of Black women, highlight their compounding forms of oppression, and express the value of culture in their lives (Collins, 2009). Collins (2009) supported the idea that self-definition permits Black women to rid themselves of the negative images and assumptions created by white society, as an act of empowerment that counteracts marginalization. Symbolic Interactionism was chosen alongside BFT because Symbolic Interactionism is a communication theory of human behavior (Faules & Alexander, 1978). Symbolic Interactionism provided a framework for making meaning of lived experiences from the actor's viewpoint. Meaning is one of the core essentials for understanding human behavior, interactions, and social processes. Symbolic interactionists have suggested that to fully understand a person's social processes, one needs to understand the meanings that an individual places on experiences within a specific context (Chenitz & Swanson, 1986; Morris, 1977). Both theories emphasize a person's lived experience, which includes their internal human behavior, the concept of meaning perceived by them, and understanding context from their perspective (Jeon, 2004).
---
Key Messages
• We explored the interplay of generational familial roles and meaning (or value) that African American female family members ascribe to sharing infant feeding information across three generations.
• Concerning the meaning of shared infant feeding information, older generations described their moral responsibility, the middle generation expressed comfort and bonding, and the younger generation reported their trust in older generations.
• Lactation professionals should recognize the value of multigenerational oral traditions and consider including protection language in addition to support language when including elders into infant feeding conversations.
---
Method
---
Design
Based on the gaps identified, this study adopted a prospective, cross-sectional qualitative research design using an asset-driven approach, centering African American women's voices and lived experiences (Brown, 2012). Compared to other research methods, qualitative research is unique because it allows the researcher to capture narratives, feelings, and thoughts. This study was approved by the University of South Florida Institutional Review Board.
---
Setting
Women living in the Southeastern region of the United States tend to breastfeed less often than women living in other regions, regardless of sociodemographic characteristics (Anstey et al., 2017). Additionally, women living in this region (particularly Black women) tend to experience disproportionately high rates of cesarean sections (many of which are medically unnecessary) (Centers for Disease Control and Prevention, 2018). In addition to several short-term and long-term health risks for mothers and their infants, cesarean sections are associated with lowered breastfeeding rates (Chen et al., 2018).
---
Sample
A sample of African American women (N = 35; 15 family triads/dyads) were recruited using purposive and snowball sampling (Patton, 2002). Family triads included the youngest adult generation (G3), her mother/mother figure (G2), and her maternal grandmother/grandmother figure (G1). Family dyads consisted of G3 and G2. Grandmother and mother figures (or other mothers) consisted of aunts, sisters, cousins, and stepmothers who were responsible for raising G3 (Collins, 2005). All participants were adult women who self-identified as African American, Black, Colored, or Negro, were a part of a family where at least two generations were willing to participate in the study, and at least one generation in each family resided in the southeastern United States. Additionally, the youngest adult generation needed to have had at least one child that they breastfed for 3 months or more, and the child was 5 years or younger at the time of the study. Families were excluded if at least one woman in the dyad/triad reported not being in active communication with the other woman/women in the family. Participants were recruited until a sample size adequate for qualitative research and thematic saturation was reached (Morse, 1994).
---
Data Collection
From February-March 2019, in-person and telephone interviews were conducted with African American women living in the southeastern United States. The first author (A. W. B.) is an African American female doctoral trained researcher who was the project leader and sole data collector for this project. Her identity mediated access to the study sample and the depth of information that each participant shared with her. Throughout the data, participants used words and phrases like "we," "us," and "you know," reflecting the race concordance between the first author and participants. This is a methodological strength of the study.
A. W. B. obtained informed consent immediately before conducting the interview. For both in-person and telephone interviews, participants had the opportunity to ask questions about the informed consent prior to beginning the interview. In each instance, interviews were audio recorded. All recorded and written data were kept confidential. Measures were taken to protect the storage of research-related records on a secure research server to which only A. W. B. had access.
A. W. B. developed and pilot tested two interview guides: one for older generations (G1s/G2s), and one for the youngest generation (G3s). She used loosely structured interviews to engage with participants using a small list of core questions and probes (ensuring similar data were collected from each participant), while also allowing them to tell their stories in their own way (Davis & Craven, 2016). Considering the sensitivity of discussed topics, A. W. B. interviewed participants in a comfortable and convenient environment, allowing them to talk freely and in detail (Davis & Craven, 2016). Interviews lasted 20-90 minutes. Each participant was offered a $20 gift card for a local retailer.
---
Data Analysis
Participant characteristics were reported using descriptive statistics. Audio recordings were transcribed verbatim and data were de-identified using pseudonyms. After being reviewed for accuracy, transcribed interviews and field notes were imported into MAXQDA software (Version 18.2.0; VERBI GmbH, Berlin, Germany) for data management and analysis. To add reliability and reduce risk of researcher bias, A. W. B. included a Black PhD candidate trained in qualitative research to serve as second reader and coder. A. W. B. used thematic analysis, combining deductive and inductive coding (to identify emergent themes) and thematic maps (to group themes; Cormack et al., 2018). Trustworthiness was achieved using the following techniques: pilot testing the interview guides, keeping detailed field notes, peer debriefing, maintaining a research reflexivity journal, member checking, utilizing multiple data coders and clarification of bias to ensure accuracy (Creswell & Miller, 2000).
---
Results
---
Participant Characteristics
Fifteen African American family dyads/triads (n = 5 G1s, n = 15 G2s, and n = 15 G3s) were interviewed. Nine families were dyads and six were triads. All participants ranged in age from 24 to 80 years, the majority (57%) were married, and all had a high school diploma or higher. G1s' mean age (range) was 71.6 (64-80 years) and parity was 3.2 live births. G2s' mean age (range) and parity were 53.6 (36-67 years) and 3.6 live births, respectively. G3s' mean age (range) and parity were 30.6 (24-34 years) and 1.9 live births, respectively. See Table 1 for additional participant characteristics.
---
Contextualizing African American Families
Self-Defined Role in Family. Exploring the perceived role each generation played in their family provided contextual information to understand the meaning they ascribed to sharing infant feeding information within their family. Each participant discussed their familial role, and Figure 1 displays a word cloud of each generation's responses. The larger the word or phrase, the more often participants stated it. G1s described their role as "Head," "Mother," "Grandmother," "Great grandmother," and "Advisor." G2s described their role as "Keeps family together," "Mother," "Counselor," and "Communicator." Finally, G3s were careful to clarify to what context they were referring. They would often say in their immediate family, they were "Head," "Organizer," and "Provider," but, in their extended family, they were the "Student," "Learner," and "Baby of the family." Each word cloud became more intricate and complex with each subsequent generation.
---
Meaning Attributed to Communicating Infant Feeding Information
Each generation was prompted to reflect on their family communication regarding infant feeding. The meaning they attributed to communicating infant feeding information was expressed across the following themes: My Responsibility, Bonding Experience, Comforting, She Cared and Gained Wisdom. Themes are defined and described below and in Table 2.
Theme: My Responsibility. My Responsibility denoted the conviction and duty G1s/G2s reported regarding sharing infant feeding information with G3s. Overall, they believed elders were responsible for passing knowledge and values down to G3s. Louise discussed the importance of elders teaching younger women about infant feeding, motherhood, and womanhood:
It was very important to share information with [my granddaughter], because the scriptures say that the older women are to teach the younger women how to love their husbands, how to be chaste housewives and how to raise their children.
And in today's society, this is what we are missing. We have information for you to bring you to another level. And if we are not teaching, then our generation line is missing a lot of stuff.
In general, G1s/G2s discussed the joy they experienced from sharing their knowledge. Vivian explained:
It's good to share your knowledge with somebody you care about. Whether they take it or not, it still makes you feel good to share it. And when you find out that they have taken your advice, you really feel good…'cause you feel like you are here for a purpose to teach or to share.… I think it's good to pass the information that you done gathered in your life on to the younger people. Because the information is the same. It might be done a different way, but you put the idea in their head of how to do this, that, and the other. And they don't necessarily have to do it the same way you do it, but you done gave them the idea and the knowledge that this can be done this way. 'Cause I believe in not making stuff harder for yourself.
Theme: Bonding Experience. Bonding Experience referred to the closeness G2s described having with G3s regarding their infant feeding discussions. Betty said:
It just kind of deepened our relationship because it was something that I had experienced as a mom and was able to pass on to her. So, it's something that we could talk about… we could laugh about. I mean because we have something else in our little history that we can talk about.
Additionally, Roxanne expressed, "It meant the world to me because I love my daughter and my grandkids. Teaching her about being a mom and feeding her babies brought us closer to each other." G2s enjoyed the idea of being able to bond with G3s over a topic that was as intimate as feeding children. Additionally, Yolanda said, "It was really comforting to know that I had some kind of experience and I could share some things that would make her life a little bit easier… some things that had passed down from my mom."
Theme: She Cared. She Cared referred to the trust, confidence, and belief G3s described regarding the infant feeding discussions they shared with G1s/G2s. This theme encompassed G3s' sentiments that G1s/G2s cared for them because they took time to share infant feeding information with them. G3s described three main reasons why they perceived older generations cared: (1) G1s/G2s would not intentionally tell them anything wrong; (2) G1s/G2s gave a personal touch while sharing infant feeding information; and (3) G1s/G2s only wanted the best for them and their children. Asia recalled valuing her mother's advice:
Being young and never experiencing [breastfeeding], I just kinda said, well my momma knows best, and I just went with what she said would be the best for my daughter. It meant a lot because I was kind of going into the situation blind and young and inexperienced. So, you know I kinda felt like my momma had my back and she wouldn't steer me wrong.
Additionally, Dominique recalled the hospital being very clinical, but her mother gave something more:
It really meant a lot to have somebody that cares…not that nurses don't care…some of them do, but some of them will
---
Table 2. Themes, Definitions, and Exemplar Quotes

My Responsibility (G1s/G2s): The conviction and duty G1s and G2s reported about sharing infant feeding information with G3s. "We supposed to talk to them…It's important to talk to the young people, to women and everything. But it's like when you say it, don't go in there like a know-it-all. But kind of make them feel comfortable. See it from both sides of the fence." (Martha, G1)

Bonding Experience (G2s): The closeness and attachment that G2s described having with G3s during their infant feeding discussions. "It felt good, you know because we were like bonding, you know. Over something different, you know. So, it felt good." (Sharon, G2)

Comforting (G2s): The tranquility and calmness that G2s experienced because of infant feeding discussions with G3s. "It was really comforting to know that I had some kind of experience and I could share some things that would make her life a little bit easier. Some things that had passed down from my mom." (Yolanda, G2)

She Cared: The trust, confidence, and belief that G2s and G1s cared when having infant feeding discussions with G3s.
---
For G3s who were first-generation breastfeeders (first in their family to breastfeed), they shared that even though older generations may have initially been critical of their breastfeeding decision, they understood their response came from a caring place. Brianna said:

I think when my [mom and grandma] don't understand something, they…shun it away. Or they say that that's something that you shouldn't do. But I know they are only doing and saying that because they only want to see the best for me…I know they still care, they just don't know how to show that support that's needed.
Theme: Gained Wisdom. Gained Wisdom revealed the value G3s placed on the wisdom expressed through infant feeding discussions with G1s/G2s. G3s gave two main reasons why they valued the wisdom they gained: (1) G1s/G2s had experienced motherhood before and (2) the wisdom from their elders is priceless. Kimberly listened to her mother's instructions because of her prior motherhood experience:

I felt she knew what was going to be best. She's been down this road before. I feel like why not listen to her.…And she was like, "Okay, you're going to breastfeed," though I had not made up my mind. So, I'm thinking in my head she told me I'm gonna breastfeed. Maybe this is what I need to do….I'll look more into it.
Amber discussed the importance of recognizing and honoring the wisdom that elders contribute, some of which was nonverbal:

Well, it was more so like with your mother, or your grandmother, or your aunt, it's not always…a real conversation. It's more like they do this to your baby…you just go with it. You don't tell them "no." And I think it's funny 'cause I think as Black women, the more educated we get, the more we get away from letting our elders do what they know to do. 'Cause obviously what they know has worked. So, it was more like they tell you, "This is what you need to do. You need to try this. You need to try that." Oh Okay. Yeah, I'm on it. I'll do that.
LaKisha lauded her grandmother for the wisdom she shared: "It really meant a lot because you can't, in this day and age, pay for that wisdom...I feel I was very blessed to have [my grandmother]." G3s generally described trust and acceptance toward the infant feeding information shared within their family.
---
Discussion
Our findings may be used to strengthen understanding of the interplay of intergenerational familial roles and meaning ascribed to communicating infant feeding information across three generations of African American women. We have added to the literature that using family-centered approaches to breastfeeding promotion and support may be beneficial for African American families, as this racial group tends to be collectivistic (kinship-centered) rather than individualistic (Steers et al., 2019). These findings have extensive implications for clinicians, educators and scholars who work with African American families and have the potential to contribute to achieving health equity in this community. Novel findings were: (1) Older generations (G1s/G2s) described a moral responsibility to communicate information with younger generations, which includes topics of infant feeding and beyond; (2) G2s described comfort and a strong bond from communicating infant feeding information with the younger generation (G3s); and (3) younger generations described trust and acceptance of the infant feeding information and wisdom they received from older generations.
G1s/G2s discussed the symbolic meaning of moral responsibility related to passing on knowledge and wisdom to the next generation, which is a consistent theme throughout African American history (Bronner, 1998; Hecht et al., 2002; Osei-Boadu, 1990). African American women and mothers have long used teachings as a form of protection for their children, as an act of maternal love, and as a central principle of their motherwork (Collins, 2005; McLoyd et al., 2019). Our findings also align with generativity concepts, which include concern and need to nurture and guide younger generations (Ashida & Schafer, 2015; Fabius, 2016). Older generations tended to define their familial role as "head," "advisor," "communicator," and "counselor," which may help to explain their conviction to share infant feeding information with G3s. Therefore, those who work with African American families should consider integrating concepts of generativity to strengthen and enrich their breastfeeding support efforts. Leveraging the social influence from older generations and including them in infant feeding conversations at prenatal or well-baby visits, and in educational programs, would honor the role of older generations in African American families and provide cross-generational influence.
In addition to responsibility, G2s reported bonding over something new in their mother-daughter relationship, as well as feeling comfort knowing they shared information that G3s could use. They described serenity in knowing that G3s gained knowledge and wisdom about motherhood, womanhood, and other aspects of life. This finding contributes to further understanding the dynamics of familial roles among African Americans. Bonding has been associated with trust, and positively affects overall self-esteem in African Americans (Causey et al., 2015). As mentioned earlier, mothering while Black requires constant concern for protection that includes various socialization strategies (Malone Gonzalez, 2020). Future breast/chestfeeding promotion efforts may benefit from reframing our current approach to include protection language and not solely support language. Proper messaging is the crux of breastfeeding promotion, support, and protection. For example, we could educate older generations about the importance of encouraging breastfeeding, which may increase the chances that information funnels down to the younger generation, thereby acting as another method of protection. To effectively bring older generations into infant feeding conversations, lactation professionals must first recognize, honor, and respect the grandmother role, and understand the value each generation places on shared infant feeding information within African American families.
G3s indicated a high level of reverence for G1s/G2s, which is a cultural tradition placing value on respecting and obeying elders (McLoyd et al., 2019). G3s demonstrated this reverence in the nuanced way they defined their familial roles: being the head of their household, while also recognizing that they were students and learners in their extended families. G3s generally described trust and acceptance of the infant feeding information shared by G1s/G2s. Considering that African Americans experience some level of medical mistrust (Jaiswal, 2019), understandably G3s found G1s/G2s to be a trusted source of information. G3s described the wisdom they gained from G1s/G2s, and that G1s/G2s cared because of their willingness to share their infant feeding knowledge and stories. Feeling cared for contributes to a mother's sense of overall well-being (Miller & Wilkes, 2015). African American communities disproportionately report experiencing substandard maternity care and do not feel cared for by the medical community (Robinson et al., 2019). Frankly, in the United States, African American women are three to four times more likely to die from pregnancy or childbirth-related reasons (Centers for Disease Control and Prevention, 2017) because of interlocking systems of oppression, including the lack of value placed on their lives within the U.S. healthcare system. Additionally, various researchers have demonstrated that healthcare providers offer breastfeeding advice, education, and support less often to African Americans than other racial/ethnic groups (Asiodu et al., 2017; Davis, 2019; Johnson et al., 2016). Providers should recognize that this generation associates receiving infant feeding information with feelings of care and concern and provide them with equitable breastfeeding education.
Interventions aimed at increasing informal breastfeeding support are likely to increase breastfeeding rates (DeVane-Johnson et al., 2018). Since African Americans tend to identify with collectivism, there are direct implications for public health programs and interventions. The following should be considered during the design phase: (1) Reverence for the role and authority of elders, and cultural traditions are foundational values (McLoyd et al., 2019); (2) elders are vital in conveying information to younger generations (McLoyd et al., 2019); (3) multigenerational and extended families influence beliefs of individuals within the family (Fabius, 2016); (4) intergenerational interactions and communication are key (Fabius, 2016); and (5) the community-based participatory research model is an important element for successful interventions in addressing minority health disparities (National Academies of Sciences, Engineering, and Medicine, 2017).
---
Limitations
In qualitative research, participants and researchers engage directly, which not only encourages prolific discussion and thick descriptions, but also can increase the possibility of researcher bias. Additionally, social desirability bias may have influenced participants' responses. While every effort was made to develop rapport with the participants and to elicit accurate responses, A. W. B. asked intimate questions about their familial relationship status and the meaning ascribed to sharing infant feeding information, and participants may have felt uncomfortable answering accurately.
---
Conclusion
This novel study provides unique perspectives to existing infant feeding literature as few researchers have examined the interplay of self-defined familial roles and how three generations ascribe meaning (value) to shared infant feeding information within African American families. Our findings suggested potentially unexpected pathways to increasing health equity through recognizing and supporting the strengths and resource-richness of intergenerational infant feeding communication within African American families using strength-based, empowerment-oriented, and ethnically sensitive approaches. The meaning examined may provide a framework for further exploration of grandmothers' roles in breast/chestfeeding support, and the specific contexts under which this may occur. Providing equitable care to African American families means respecting each generation, gauging their feeding attitudes, meeting them where they are, and listening to them.
---
Disclosures and conflicts of interest
The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Drs. Austin and Schafer served as mentors to Dr. Woods Barr during her doctoral program and continue to mentor her. Dr. Woods Barr serves as a mentor to Jacquana Smith. Authors report no conflict of interest.
---
The COVID-19 pandemic that has hit the world in the year 2020 has put a strain on our ability to cope with events and revolutionized our daily habits. On 9 March, Italy was forced to lockdown to prevent the spread of the infection, with measures including the mandatory closure of schools and nonessential activities, travel restrictions, and the obligation to spend entire weeks in the same physical space. The aim of this study was to assess the impact of the COVID-19 pandemic and lockdown measures on quality of life (QoL) in a large Italian sample, in order to investigate possible differences in QoL levels related to both demographic and pandemic-specific variables. A total of 2251 Italian adults (1665 women, mainly young and middle adults) were recruited via a snowball sampling strategy. Participants were requested to answer an online survey, which included demographic and COVID-related information items, and the World Health Organization Quality of Life BREF questionnaire (WHOQOL-BREF). The results showed statistically significant differences in QoL depending on a number of variables, including sex, area of residence in Italy, and being diagnosed with a medical/psychiatric condition. To our knowledge, this is the first study to assess QoL during the COVID-19 pandemic in Italy; therefore, the present findings can offer guidelines regarding which social groups are more vulnerable to a decline in QoL and would benefit from psychological interventions.
---
Introduction
During pandemics, the population's psychological responses to infection play an important role in both the spreading and containment of the disease, influencing the extent to which psychological distress and social disorder occur [1]. This may be partly explained by those emotional states that frequently mark pandemics, such as uncertainty, confusion, and a sense of urgency [2]. In the early stages of a pandemic, feelings of uncertainty prevail, due to the fear of becoming infected and not having the right information about the best methods of prevention and management [3][4][5]. Furthermore, pandemics are associated with various psychosocial stressors, including health threats to oneself and loved ones; significant changes in daily routine, such as restrictions in physical activity (PA) behavior [6][7][8]; separation from family and friends; shortages of food and medicine; wage loss; social isolation due to quarantine or other social distancing measures; and school closures [9]. Serious economic difficulties can also occur if a family's primary wage earner is unable to work due to illness [1].
For these reasons, the effects of the current COVID-19 pandemic would be more pronounced, more widespread, and longer-lasting than the purely somatic effects of infection, with serious impairment on peoples' actual and perceived quality of life (QoL). The COVID-19 pandemic that has hit the world in the last 12 months has indeed put a strain on our ability to cope with events and revolutionized our daily habits. In Italy, a state of emergency was declared by the Italian government on 31 January 2020 [10], when two Chinese tourists in Rome tested positive for the SARS-CoV-2. The first case in Italy was recorded in February 2020, and the epidemic rapidly spread, reaching 220 infections on 24 February [11]. The government responded by implementing prevention measures and infection control on 11 March, when the number of infections reached 12,462 and the total deaths were 827. Despite the fact that the infection spread differently between the northern and southern regions of Italy, the increasingly restrictive containment measures led to a total lockdown throughout the country (11 March-3 May 2020). Lockdown measures included the mandatory closure of schools and nonessential commercial activities and industries, in addition to travel restrictions both inside and outside the country. After 3 May, the number of infections dropped below 1221 new cases and many restrictions were gradually eased [12]. On 3 June, freedom of movement across regions and European countries was restored and other nonessential activities reopened.
Most of the early studies on the psychological impact of COVID-19, published at the beginning of the pandemic, have compared the current situation with the SARS epidemic in 2003 [13][14][15][16]. These studies highlighted the risk for people with suspected or certain infections to experience uncontrolled fear over a long period, not only in relation to the disease but also to the condition of quarantine. During the previous SARS epidemic, a peak in the incidence of many psychiatric disorders, such as depression, anxiety, panic attacks, psychomotor agitation, and suicide, had been reported. Kwek and colleagues [17] brought out the long-term consequences of the pandemic on health and claimed that SARS significantly impaired both QoL and mental functioning three months after the acute episode. A small number of additional studies conducted during a previous pandemic also showed the consequences of the pandemic on psychological well-being of infected people, highlighting various factors associated with greater psychological distress, including sociodemographic variables, such as being a woman and middle-aged adult or having a lower level of education [3,5]. Moreover, the majority of the studies recently reviewed by Brooks and co-workers [18] reported on the negative psychological effects of quarantine, including symptoms of post-traumatic stress, confusion, and anger. Examples of relevant stressors were a long quarantine period, fear of infection, frustration, boredom, inadequate supplies of personal security systems, inadequate information, financial losses, and social stigma.
This evidence has been further supported by an increasing number of publications on mental health demonstrating higher levels of psychological distress among the population during COVID-19 pandemic [19][20][21][22]. For instance, a large Italian study by Rossi and colleagues [19] showed an increase in anxiety and depressive symptoms for people who had lived four weeks of lockdown, and found 37% of the sample with post-traumatic stress symptoms, whereby female gender and younger age were risk factors for worse mental health.
However, while the attention on the consequences of COVID-19 for mental health has been increasing, there is a limited number of international studies on its effects on QoL. Among the studies already published, Pieh and co-workers [23] found an average psychological score on the World Health Organization Quality of Life BREF (WHOQOL-BREF) questionnaire significantly lower than in a study published in 2015 [24]; the study also reported lower scores for younger adults, women, individuals without work, and those with low income. Horesh, Kapel Lev-Ari, and Hasson-Ohayon [25] also reported higher stress levels and lower QoL for women, younger participants, and for people with pre-existing chronic illness. However, to our knowledge, there have been no studies investigating QoL in Italian populations during the COVID-19 pandemic [23,25-28].
In addition to sociodemographic variables, it has been suggested that other factors might influence QoL during pandemics, such as the difficulty in accessing healthcare services [26,27] and social isolation [29]. Van Ballegooijen and colleagues [27] described considerable levels of stress, a lower QoL, and concerns about access to healthcare during the first eight weeks of the COVID-19 lockdown in the Netherlands and Belgium. With respect to the difficulty in accessing healthcare, a Chinese study showed that the relevant index of QoL decreased with increasing age, due to the presence of chronic diseases in this segment of the population [26]. Regarding social isolation, a British study reported lower levels of wellbeing and QoL for people who felt more isolated than usual during lockdown, whereas the level of perceived social support showed significant positive correlations with QoL [29]. Another study from a Chinese sample showed relatively lower levels of physical and psychological domains of QoL but, interestingly, not in the social and environmental domains [28].
These studies highlight that the pandemic situation, including the measures put in place to contain it, involves various aspects of life and health. Monitoring the state of health requires the measurement of indicators capable of grasping the many subjective and functional dimensions of well-being and QoL. Particularly, the assessment of QoL is increasingly often considered as an integral part of any intervention that aims to promote health and wellness. QoL is actually viewed as an overall and multidimensional indicator of general wellbeing. Indeed, the WHO defines QoL as "an individual's perception of their position in life in the context of the culture and value systems in which they live, and in relation to their goals, expectations, standards and concern" (p. 1405) [30]. In measuring QoL, the WHOQOL group takes the subjective dimension strongly into account [31]. The ability to feel a certain well-being, regardless of living conditions, is a subjective variable directly related to other dimensions: genetic variables, personality, and life events. It is a set of factors dynamically interacting with each other in different ways across the life span and across different cultures. QoL is not a simple and linear entity; it is indeed a complex, multidimensional construct that, according to the WHO, includes six domains: physical, psychological, social, level of independence, environment, and spirituality/religions/personal beliefs.
The present study aimed to explore the impact that both the COVID-19 emergency and the resulting restrictive measures had on the perception of QoL among the Italian general adult population. Additionally, this study aimed to investigate possible differences in QoL depending on sociodemographic variables, such as sex, age, marital status, occupational status, level of education, and area of residence in Italy, as well as specific factors related to the COVID-19 outbreak (e.g., changes in employment status and location, family members or friends infected with SARS-CoV-2, adherence to control and precaution measures, household size during the COVID-19 outbreak). Particular reference will be given to the physical, psychological, social, and environmental domains of QoL as measured by the WHOQOL-BREF.
---
Materials and Methods
---
Procedure
An online cross-sectional survey was performed with the Qualtrics ® (Qualtrics, Provo, UT, USA) Survey Platform. Such a data collection strategy was chosen as it allowed us to reach as many voluntary participants as possible in a phase of forced social distancing. The survey started after 7 weeks of quarantine in Italy (25 April 2020) and was performed for about 6 weeks, until the end of lockdown measures (2 June 2020). This measurement point was selected because significant changes in individuals' QoL need some time to be perceived by the person. Moreover, this timeframe potentially allowed the population to adjust to the new situation. The sample was recruited via a snowball sampling strategy. A link to the Qualtrics questionnaire was sent via e-mail, social networks (Facebook and WhatsApp), and official working platforms (website of the University of Palermo, Italy). The link was shared with personal contacts of the research group members, who in turn passed the survey on to their friends and acquaintances. A brief presentation informed the participants about the aims of the study, and electronic informed consent, assuring maximum confidentiality in the handling and analysis of the responses, was requested from each participant before starting the investigation. The survey took approximately 30 min to complete. Participation was voluntary and free of charge. To guarantee anonymity, no personal data which could allow the identification of participants were collected. Participants could withdraw from the study at any time without providing any justification, and in that case the data were not saved. Only questionnaire data with a complete set of answers were considered. The study was conducted in accordance with the Declaration of Helsinki and was approved by the Bioethics Committee of the University of Palermo (n. 4/2020).
---
Participants
Italian individuals over 18 years of age who were living in Italy at the time of quarantine were eligible for participation. A total of 2332 Italian adults were recruited through the online survey, with an attrition rate of approximately 20%. Of the 2332 who completed the survey, 71 respondents were excluded because of missing demographic data, and a further 10 participants were excluded because they were resident outside Italy at the time of data collection. Our final sample comprised 2251 respondents. Demographic characteristics of the study sample are presented in Table 1.
---
Measures
The Italian version of the WHOQOL-BREF was used to assess QoL [32,33]. The WHOQOL-BREF is a short version of the WHOQOL-100, developed by the WHO for use in situations in which time is restricted and respondent burden must be minimized, such as in epidemiological surveys. It is a 26-item self-rating questionnaire and a person-centered instrument, giving scores for overall QoL and its four dimensions: physical health (e.g., sleep quality, energy and tiredness), psychological health (e.g., positive emotion, self-esteem, personal beliefs), social relationships (e.g., social support and sexual activity), and environment (e.g., climate, transportation, and healthcare assistance). Items ask respondents to rate their QoL during the last two weeks, and each is rated on a 5-point Likert scale. Similar to the Italian validation study and the original version of the questionnaire [29,30], internal consistencies for the WHOQOL-BREF were satisfactory, with Cronbach's alpha values ranging from 0.57 for social relationships to 0.79 for physical health. Reliability for the global score of the WHOQOL-BREF was good (Cronbach's alpha = 0.88).
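Cronbach's alpha, the internal-consistency coefficient reported above, can be computed directly from an item-response matrix as (k/(k−1))·(1 − Σ item variances / total-score variance). The following is a minimal illustrative sketch; the function name and sample scores are hypothetical and not taken from the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)

# Three perfectly consistent (identical) items yield an alpha of ~1.0
scores = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]])
alpha = cronbach_alpha(scores)
```

Note the use of `ddof=1` for sample (rather than population) variances, which matches the usual reporting convention for scale reliability.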
---
Statistical Analyses
Descriptive statistics and frequency analysis were used to investigate demographic characteristics and COVID-related information. Comparisons of these variables by sex (men vs. women) and age range (young, middle, and older adults) were performed using Pearson's χ² test for nominal demographic variables and Student's t test for independent samples for continuous demographic variables.
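As an illustration of the Pearson χ² comparison described above, the statistic sums (observed − expected)²/expected over the cells of a contingency table (e.g., sex × education level). This is a hedged pure-Python sketch with hypothetical counts, not the study's data:

```python
def chi2_statistic(table):
    """Pearson's chi-square statistic and degrees of freedom
    for a contingency table given as a list of rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(col_totals) - 1)
    return chi2, dof

# Hypothetical 2x3 table: sex (rows) x education level (columns)
chi2, dof = chi2_statistic([[72, 210, 304], [95, 350, 210]])
```

In practice one would obtain the p-value from the χ² distribution with the returned degrees of freedom (e.g., via `scipy.stats.chi2_contingency`).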
Analysis of variance (ANOVA) was used to analyze differences in respondents' levels of QoL on the global score of the WHOQOL-BREF, while multivariate analysis of variance (MANOVA) was employed to analyze differences on the domain scores of the WHOQOL-BREF. Statistical analyses were performed using SPSS (version 25) for Windows [34]. In all statistical tests, a p value of less than 0.05 was considered significant.
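The ANOVA on the global score can be sketched as below; scipy's one-way ANOVA stands in for the SPSS procedure, and the group means, SDs, and sizes are invented for illustration (they are not the study's estimates).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic WHOQOL-BREF global scores for three age groups
# (hypothetical means/SDs on the 0-100 scale)
young  = rng.normal(53, 8, 940)
middle = rng.normal(55, 8, 1175)
older  = rng.normal(55, 8, 137)

# One-way ANOVA: does mean QoL differ across age groups?
F, p = stats.f_oneway(young, middle, older)
```

A MANOVA generalizes this by testing all four domain scores jointly, which is why the paper reports a multivariate main effect first and then between-subject (per-domain) F tests.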
---
Results
---
Demographic Characteristics
As Table 1 shows, the final sample comprised 2251 participants (74% female), recruited mainly from the northern (41.3%) and southern (53.8%) regions of Italy. Respondents were mostly young (age 18-34) and middle-aged (age 35-64) adults (41.7% and 52.2% of the entire sample, respectively), while the group of older adults (age 65 and older) was smaller (6.1% of the total sample). Most had a university degree (41.2%) or a high school diploma (36.7%), were employed (62.3%), and were either single (46.7%) or married (43.5%). University students (20.2% of the sample) were enrolled in social sciences and humanities (53.2%), biotechnical sciences (29.3%), or medical (14.2%) study programs, while a few did not report their major (3.3%).
With regard to comparisons between men and women on demographic variables, we found statistically significant sex differences in employment status (χ² = 21.25, p < 0.001), level of education (χ² = 23.34, p = 0.001), and age range (χ² = 7.59, p = 0.022). In particular, women were less often employed than men (80% of the unemployed respondents were women), despite having higher levels of education (see Table 1): women more often reported having a university degree (43.4% vs. 36.0%) or a postgraduate title such as a PhD (14.8% vs. 12.3%). Moreover, with regard to age distribution, female respondents were mainly middle-aged adults, and fewer of them fell into the older adult group compared to men.
Table 1 reports that 136 respondents (6.1%) had a psychiatric diagnosis at the time of data collection, with a higher prevalence in women than in men (χ² = 6.25, p = 0.012). Within this group, 47.1% had been diagnosed with anxiety disorders, 41.2% with mood disorders, and the remaining 11.7% with other conditions (e.g., eating and personality disorders).
In addition, 394 participants (17.5%) reported being in treatment for a medical condition, mainly circulatory system diseases (24.1%), such as hypertension and heart failure, and endocrine system diseases (19%), such as diabetes and hypothyroidism. No significant difference in distribution between men and women was detected (χ² = 1.38, p = 0.239).
---
COVID-Related Information
Table 2 shows the results for epidemic-related information. Most participants had their job/study activity moved to home (50.9%), did not have any family members or friends diagnosed with COVID-19 (93.6%), were always adherent to the control and precaution measures against COVID-19 (62.9%), and had a household size of mainly three to four persons (55.3%).
Concerning sex differences among these variables, we found a significantly different distribution of answers between men and women for adherence to the control and precaution measures against COVID-19 (χ² = 10.28, p = 0.006). In particular, more women (64.4%) than men (58.8%) reported always adhering to the control and precaution measures, rather than often or not that much. We did not find any significant sex difference in the distribution of changes in job/study activity (χ² = 5.12, p = 0.163), presence of family members or friends infected by COVID-19 (χ² = 1.25, p = 0.263), or household size during the outbreak of the disease (χ² = 6.28, p = 0.099).
With respect to age range differences, we found a significantly different distribution of answers for changes in job/study activity (χ² = 74.92, p < 0.001) and household size during the outbreak of the disease (χ² = 80.45, p < 0.001). In particular, young (50.4%) and middle-aged (53.9%) adults mainly reported having their job activity moved to home, as well as a household size of three to four persons during lockdown (58.7% and 55.0%, respectively), whereas older adults reported no changes in their job or a job moved to home to the same extent (29% for both), and a household of mainly two persons (44.2%). No significant age differences were detected in adherence to the control and precaution measures against COVID-19 (χ² = 1.57, p = 0.815) or in the presence of family members or friends infected by COVID-19 (χ² = 3.62, p = 0.163).
---
Quality of Life during the Outbreak of COVID-19
Table 3 presents means and standard deviations for the WHOQOL-BREF global and domain scores. The overall average WHOQOL-BREF score for our sample was 54.48 (SD = 7.77). Analyses performed on the single items showed that the item with the lowest score was item 14 (use of spare time), which belongs to the environment domain: 932 (41.4%) participants reported having little or no time for leisure at the time of data collection. For the other three domains, the items with the lowest scores were: item 15 for the physical domain, as 1019 (45.3%) participants reported little or no possibility to do physical activity; item 5 for the psychological domain, with 712 (31.6%) respondents reporting that they were not enjoying their lives at the time of data collection; and item 21 for social relationships, as 843 (37.4%) respondents reported being little or not at all satisfied with their sexual life.

Results of the ANOVA showed that WHOQOL global scores differed between male and female participants (F (1, 2250) = 9.34, p = 0.002), with women scoring lower than men. No significant difference was found for age range (F (2, 2250) = 1.91, p = 0.148). For the factor scores of the WHOQOL, two separate MANOVAs were run with sex and age range, respectively, as the only between-subject factor. The model considering sex showed a significant main effect for this variable (F (1, 2250) = 13.51, p < 0.001); between-subject tests showed significant differences between men and women in the physical (F (1, 2250) = 17.58, p < 0.001), psychological (F (1, 2250) = 25.85, p < 0.001), and environmental (F (1, 2250) = 7.00, p = 0.008) domains. As can be seen in Table 3, women reported overall worse psychological, physical, and environmental QoL during the pandemic compared to men.
Age range was also a significant between-subject factor for differences across the WHOQOL-BREF domains (F (1, 2250) = 11.93, p < 0.001). Results showed significant differences among groups in the psychological (F (2, 2251) = 11.69, p < 0.001) and environmental (F (2, 2251) = 11.96, p < 0.001) domains. In particular, young adults reported the lowest levels of psychological QoL, significantly lower than those of both middle-aged (p < 0.001) and older (p = 0.019) adults, as attested by Bonferroni's post hoc comparisons. As shown in Table 3, middle-aged adults had the lowest scores on the environment domain compared to both young (p < 0.001) and older (p = 0.005) adults. No significant differences emerged in the physical (F (2, 2251) = 0.39, p = 0.675) or social relationship (F (2, 2251) = 1.82, p = 0.161) domains.
---
Differences in Demographic and COVID-Related Variables
The effects of 10 further relevant variables (i.e., area of residence in Italy, level of education, marital status, employment status, current psychiatric diagnosis, current medical diagnosis, changes in employment status and location, family member or friend infected with SARS-CoV-2, adherence to the precaution and control measures, and household size during the COVID outbreak) were tested on the WHOQOL global and domain scores. In light of the results on sex and age range, sex was controlled for in all additional ANOVAs, and both sex and age in all MANOVA models. Table 4 presents means, standard deviations, and statistics of the ANOVA and MANOVA analyses. Overall, no interaction term was significant; therefore, these statistics are not reported in the table.
As reported in Table 4, seven of the ten variables showed significant differences in the WHOQOL global score (global level of QoL), and five in the WHOQOL factor scores (physical, psychological, environmental health, and social relationships; p < 0.05). Overall, three variables, namely marital status, family member or friend infected with SARS-CoV-2, and household size during the COVID outbreak, had no significant effect on either the global or the factor scores of the WHOQOL (all ps = n.s.).
Regarding the WHOQOL global score, results from Table 4 show that individuals with the poorest QoL during the outbreak of the disease (i.e., whose global WHOQOL score was significantly lower compared to the other groups) had the following characteristics: they lived in the south of Italy, had lower education levels (secondary or high school diploma), were unemployed or university students, had been diagnosed with psychiatric and medical syndromes, had their job activity suspended, and did not comply with the restriction measures to counter the COVID-19 pandemic. With respect to the factor scores of the WHOQOL, significant effects were found for the following variables: area of residence in Italy, level of education, diagnosis of a medical condition, changes in employment status and location, and adherence to precaution measures. None of these effects pertained to the WHOQOL dimension assessing social relationships (all ps = n.s.). When area of residence in Italy was considered, between-subject tests revealed that only the differences pertaining to environmental health were significant (F (2, 2250) = 11.16, p < 0.001), with respondents living in the south reporting overall worse environmental conditions, significantly different from those of respondents from the north of Italy (p < 0.001).
Between-subject tests for level of education showed that environmental (F (3, 2145) = 5.43, p = 0.001) and psychological health (F (3, 2145) = 3.45, p = 0.016) differed significantly across groups. In particular, Bonferroni post hoc tests showed that individuals with a high school diploma had significantly lower levels of psychological health than respondents with either a university degree (p = 0.028) or a postgraduate title (p < 0.001). Moreover, individuals with a postgraduate title reported the highest scores for environmental health, significantly different from those of individuals with a secondary (p < 0.001) or high school (p < 0.001) diploma, as well as with a university degree (p = 0.040).
With respect to medical conditions, between-subject tests showed that scores in the physical (F (3, 2145) = 8.91, p = 0.003), psychological (F (3, 2145) = 4.03, p = 0.045), and environmental (F (3, 2145) = 4.90, p = 0.027) domains of QoL were significantly lower for respondents reporting a diagnosis of a medical condition.
Between-subject tests for changes in employment status and location showed significant differences across groups in both the physical (F (3, 2250) = 5.97, p < 0.001) and psychological (F (3, 2250) = 4.21, p = 0.006) domains. Specifically, respondents who were unemployed prior to the COVID-19 outbreak reported worse levels of both physical and psychological health, significantly lower than those of individuals whose job/study activity continued unchanged (p < 0.001 for both domains) or was moved to home (p = 0.001 and p = 0.012 for the physical and psychological domains, respectively).
With respect to adherence to control measures, between-subject tests showed that the environment domain (F (3, 2145) = 6.15, p = 0.002) differed significantly across groups, with individuals reporting lower levels of adherence having the poorest environmental QoL compared to respondents who reported adhering either always or often (both ps < 0.001).
---
Discussion
The study aimed to assess the impact of the COVID-19 pandemic and lockdown measures on QoL in a large Italian sample. The main objective was to investigate possible differences in QoL levels related to both demographic and pandemic-specific factors, with particular attention to physical, psychological, social, and environmental dimensions of QoL. Our results show a number of significant differences in QoL levels related to several relevant variables.
Although the WHOQOL does not have cut-off scores allowing a precise definition of QoL as "poor" or "good", and despite the absence of recent data on Italian QoL assessed with the WHOQOL, the existing literature allows some general considerations. Our results showed that, during the lockdown period, the means of both the global and dimension scores of the WHOQOL were lower than those obtained in the Italian validation study of the questionnaire [33] and in an international study comparing the main psychometric properties of the WHOQOL-BREF across 23 countries [31]. Along this line, it is interesting to note that our sample showed a poorer QoL than reported in another Italian study, which estimated QoL changes over an 18-month period in an adult population sample after the L'Aquila 2009 earthquake [35]. These results emphasize that the pandemic emergency and the lockdown measures had a severe impact on the QoL of the Italian general population, as confirmed by the report of ISTAT (the Italian National Institute of Statistics) [36]. It was, and still is, a genuine collective trauma. In fact, although only 7.4% of respondents reported having a friend or relative hit by COVID-19, we did not find significant differences in QoL compared to participants who had no friends or relatives infected by the virus. People's lives during lockdown were affected by an abrupt and sudden change in their habits, a sense of precariousness, the indefiniteness of the future, and strong worry for their health. All these factors may have affected general QoL levels.
Looking into this further, the items with the lowest overall scores were: "To what extent do you have the opportunity for leisure activities?" (item 14, environment dimension), "How well are you able to get around?" (item 15, physical domain), "How much do you enjoy life?" (item 5, psychological domain), and "How satisfied are you with your sex life?" (item 21, social domain). These items capture the considerable impact that the lockdown measures had on the dimensions of life satisfaction and pleasure, contributing to an impairment of the ability to enjoy life. Particular attention should be given to the psychological domain, which seems to point to depressive features related to the loss of pleasure in one's life. Furthermore, the shelter-in-place order could have restricted physical activity [6], with a possible significant negative impact on psychological well-being and QoL. In fact, recent literature suggests that daily physical activity helped to offset the psychological burden and negative emotions caused by the COVID-19 pandemic [6-8]. A possible explanation is that regular exercise is linked to changes in the hypothalamic-pituitary-adrenal (HPA) axis, with reduced adrenal, autonomic, and psychological responses to a psychosocial stressor [37].
With respect to the influence of demographics on QoL, results showed significant differences between men and women. In line with the literature on QoL, women reported overall worse psychological, physical, and environmental QoL during the pandemic compared to men [31,33]. For instance, Girgus and Yang [38] showed that women's increased psychological vulnerability might be due to a higher tendency to ruminate and to use internal attribution for negative events. Pineles, Hall, and Rasmusson reported more cognitive symptoms of PTSD, such as self-blame, in women compared to men [39]. It is important to note that in our sample, 80% of unemployed respondents were women, despite their higher levels of education compared to men. Moreover, among the 6.1% of respondents with a psychiatric diagnosis, women showed the highest prevalence. In this regard, epidemiological data have shown that in Italy, despite greater longevity, women experience more illness and tend to have a lower quality of physical and psychological health than men [40,41]. According to Bekker [42], gender differences in health-related phenomena can be explained through a holistic approach, in which the relationships between biological sex, gender, and health are various, diverse, operative at many levels, and complex; this relationship can be moderated by daily life or social circumstances, person-related characteristics, and healthcare factors [42]. With respect to daily life and social circumstances, we can assume that, as a consequence of school closures during the COVID-19 lockdown, Italian women experienced a greater overload of care and work, producing an organizational shock within families [35,43].
With regard to age range differences, young adults (18-34) reported the lowest levels of psychological health, significantly lower than those of both middle-aged and older adults. Middle-aged adults had the lowest levels on the environment dimension compared to both young and older adults. No significant differences emerged for the physical and social domains. Compared to other age groups and in the context of the pandemic, younger adults appear to be the most psychologically fragile. Their age is characterized by important transitions (starting university, graduation, first access to work, precarious work conditions, unemployment, sentimental projects), which during the pandemic might have exposed them to higher risks for their psychological wellbeing. Students, unemployed young people, and young people in the process of building a family or achieving work objectives suddenly saw a threat to their projects and prospects for the future (finding a job, getting married). Young adults likely experienced more negative emotions and loss of self-confidence, with a possible impact on reasoning ability, learning, memory, and concentration, for example in university performance. In fact, emotional skills are crucial to cognitive processes as they affect cognitive styles, use of learning strategies [44], and, consequently, performance [45].
Other studies conducted during lockdown [19,23,25] showed lower QoL and high levels of stress, anxiety, and depression in younger adults. Pieh and colleagues [23] reported a clear age-related effect in all tested mental health scales, with the younger adult groups showing the worst scores, in contrast to a pre-COVID-19 study. The authors hypothesized various explanations for these findings, such as the more uncertain conditions and financial difficulties that arose during the COVID-19 lockdown. According to Horesh and co-workers [25], instead, older age seemed to act as a protective factor for psychological health; this could be attributed to richer life experience [46] and a possibly reduced fear of illness and death, despite the fact that the elderly are constantly identified as a high-risk population [26,47-49]. Middle-aged adults showed less impact on mental health but the greatest dissatisfaction with the availability of financial resources, the accessibility and quality of health and social care [26,27], domestic environment conditions, access to information, the sense of safety for their own health in the physical environment, and the possibility of accessing means of transport safely, compared to younger and older adults.
During lockdown, about 50% of young and 53% of middle-aged adults underwent changes in work conditions (work moved to home). This can also explain the dissatisfaction with housing conditions, in which parents and children simultaneously shared the same spaces to carry out their activities, with a probable lack of personal space. Moreover, about 18% of middle-aged adults and about 14% of older adults had to stop their work activities, which could have led to dissatisfaction with their own financial resources, no longer considered adequate to meet their needs. In addition, in the first weeks after the declaration of the state of emergency, mass media were overwhelmed by information that was not always accurate, given the limited knowledge on the contagion and treatment of COVID-19. People probably felt a sense of uncertainty, confusion, and serious threat to their own physical safety. High intolerance of uncertainty has been found to exacerbate the relation between daily stressors and increased anxiety [50] and, unsurprisingly, increased intolerance of uncertainty, as well as the desire to reduce uncertainty, has been found to predict increased information seeking and monitoring of a situation [51]. Therefore, obtaining information that only provides uncertain estimates of viral threats may increase perceptions of uncertainty and thus anxiety [5].
Our results also showed that individuals who were living in the south of Italy at the time of the lockdown, had lower education levels (secondary or high school diploma), were unemployed or university students, were diagnosed with psychiatric and medical syndromes, had their job activity suspended, or did not comply with the control measures to counter the COVID-19 pandemic had the poorest QoL during the outbreak of the disease. It is interesting to point out that southern Italy, during the first period of lockdown, was less affected by the epidemic, yet its population showed lower levels of satisfaction with their general state of life. On the one hand, this can be related to structural differences that have always produced lower QoL levels in the south than in the regions of northern Italy [52], especially with regard to the environment dimension (availability of financial resources, access to healthcare services, housing conditions, quality of public transport). Starting from these structural differences between the north and south of Italy, it is possible to assume that the population of southern Italy perceived greater concern and distrust in the ability to cope with the pandemic. In support of this, Rossi and colleagues [19] showed higher odds of several psychological outcomes, such as anxiety, depression, perceived stress, and insomnia, in people living in southern Italy.
With regard to the relationship between low education level and low QoL scores, the most compromised dimensions appear to be psychological health and the interaction with the environment. Skevington [53] reported worse QoL in people without education, especially in some areas of QoL (lack of positive feelings; inadequate financial resources; little information and few skills; few opportunities for recreation and leisure; weak spiritual, religious, and personal beliefs). Conversely, the most highly educated respondents reported a more positive environmental QoL, in terms of financial resources and the physical environment, e.g., pollution and access to information and skills [53,54]. It is conceivable that, during lockdown, a lower educational level impaired well-being more because it hindered access to non-alienated paid work and economic resources, and may have reduced the sense of control over one's life, as well as access to stable social relationships, especially marriage. In turn, a lower educational level could increase emotional distress (including depression, anxiety, and anger), physical distress (including aches, pains, and malaise), and levels of dissatisfaction.
As to work conditions, individuals who were unemployed prior to the COVID-19 outbreak reported overall worse physical and psychological QoL, significantly lower than that of individuals whose job/study activity continued unchanged or was moved to home. These findings are supported by previous studies highlighting a relationship between unemployment and poorer health-related QoL, explained by the economic and social consequences of unemployment [55,56]. Work has a central part in most individuals' lives: it meets both material needs (income security and social protection) and social needs (self-esteem and identity, social interaction, time structure, and a feeling of purpose and participation in society) [57], and these requirements were further compromised by the limitations on job search activities during lockdown [36].
Persons suffering from medical conditions reported lower scores in the physical and psychological domains, but also in the interaction with the environment, probably due to difficulties in accessing healthcare services (e.g., concern about cancelled or postponed care). During the pandemic, Italian hospitals were converted into COVID hospitals, and entire wards and surgeries were closed, making access difficult for all those with chronic or acute non-COVID-19 medical conditions. Furthermore, as suggested by Van Ballegoijen and co-workers [27], patients may have been anxious about visiting their physician due to fear of infection or to avoid further burdening the healthcare system. This could lead to secondary healthcare problems, such as delays in the diagnosis of critical medical conditions and exacerbation of existing health conditions. Horesh and colleagues [25] hypothesized that having a pre-existing medical condition is associated with distress, because COVID-19 is more dangerous for those with existing illness and, for that reason, these patients may have felt more vulnerable.
Most of our participants reported adhering to the government-enacted measures much or very much, with a significant difference between women and men in favor of the former. These data are in line with the study by Carlucci, D'Ambrosio, and Balsamo [58], which suggested that women's greater adherence to containment measures may help explain sex differences in mortality and vulnerability [59,60] to COVID-19; in this case, women's adherence acted as a protective factor. As suggested by findings from previous studies on age and gender patterns of risk-taking behaviors [61,62], men are more likely to engage in risk-taking behaviors.
Finally, the present results also highlighted that people who felt greater dissatisfaction in all areas of QoL, especially the environment dimension, showed lower adherence to containment measures. After all, QoL results from the interaction between environmental and personal factors, and it is possible that people who perceived greater dissatisfaction with the availability of financial resources, physical safety, and the accessibility and quality of health and social assistance adopted a more passive attitude, linked to a sense of helplessness about the real possibility that their personal contribution could contain the spread of contagion. Moreover, feelings of helplessness and passivity in dealing with the threat may result from a high perception of risk that can promote the adoption of strategies to minimize infection [63].
---
Conclusions
Few international studies have investigated how severe the impact of the COVID-19 pandemic has been on QoL and, to our knowledge, no prior study has examined the Italian population [23,25-28]. We believe that the assessment of QoL represents an important indicator of global health, which allows us to grasp the health status of a population in a multidimensional way, especially at this particular moment in which all dimensions of life have been disrupted.
Our study highlights significant differences in QoL and its dimensions (physical, psychological, environmental, and social) depending on a number of variables, including sex, age, employment status, area of residence in Italy, and being diagnosed with a medical/psychiatric condition during the COVID-19 pandemic and lockdown. Strengths of the present study include the focus on a large Italian sample, which was reached in a relatively short time period as the pandemic situation developed rapidly, and the use of an internationally validated questionnaire. The present study also has some shortcomings, such as the gender imbalance, the cross-sectional data collection, the lack of information on the population of the central regions of Italy, and the absence of exclusion criteria other than being under 18 years of age or not living in Italy during the COVID-19 lockdown.
We are aware that we have analyzed only some of the multiple aspects that influence QoL; many others, such as the role of physical activity in psychological well-being, should be considered in further research. However, based on our findings, attention should be given to people with a combination of risk factors, including younger age, female gender, unemployment, a pre-existing illness, and living in the south of Italy, assisting them in coping with the pandemic, especially now that continued exposure to the epidemic, and to the measures necessary to contain it, above all in Italy, could further impair people's quality of life.
We believe that subjective well-being measures are needed to assess a population's condition, and that they should be added to the health and economic indicators now favored by policymakers. Such measures include QoL, which may be conceptualized as a multidimensional construct influenced by personal and objective factors, as well as by their interactions. The subjective evaluation that people make of their living conditions, expectations, and beliefs could also play a very important role in adherence to both contagion containment measures and vaccination.
To date, health authorities have devoted relatively little attention to the identification and management of the psychological and social factors likely to significantly influence a person's QoL. Our results can offer guidance on which social groups may be at high risk of decreasing QoL, revealing areas of vulnerability during a pandemic. This line of research is particularly important for the management of public health interventions, especially with regard to the need for an optimal allocation of resources. Our findings suggest the following recommendations for future interventions: (1) more attention needs to be paid to vulnerable groups such as the young, women, the unemployed, and people living in the south of Italy, implementing psychological interventions for vulnerable individuals coping with the long-term consequences of this pandemic; (2) accessibility to medical resources and the public health service systems should be further strengthened and improved; (3) comprehensive crisis prevention and psychological intervention are needed to reduce distress and prevent further impairment of QoL.
---
Data Availability Statement: Data available on request due to restrictions (privacy).
Acknowledgments: With grateful thanks to Marianna Franco, Simona Piraino, and Sofia Scordato for their help with data collection.
---
Conflicts of Interest:
The authors declare no conflict of interest.
---
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. |
Background: The aim of the present study was to investigate the relationship between socio-economic status (SES) and peritoneal dialysis (PD)-related peritonitis. ♦ Methods: Associations between area SES and peritonitis risk and outcomes were examined in all non-indigenous patients who received PD in Australia between 1 October 2003 and 31 December 2010. SES was assessed by deciles of postcode-based Australian Socio-Economic Indexes for Areas (SEIFA), including the Index of Relative Socio-economic Disadvantage (IRSD), Index of Relative Socio-economic Advantage and Disadvantage (IRSAD), Index of Economic Resources (IER) and Index of Education and Occupation (IEO). ♦ Results: 7,417 patients were included in the present study. Mixed-effects Poisson regression demonstrated that incident rate ratios for peritonitis were generally lower in the higher SEIFA-based deciles compared with the reference (decile 1), although the reductions were only statistically significant in some deciles (IRSAD deciles 2 and 4-9; IRSD deciles 4-6; IER deciles 4 and 6; IEO deciles 3 and 6). Mixed-effects logistic regression showed that lower probabilities of hospitalization were predicted by relatively higher SES, and lower probabilities of peritonitis-associated death were predicted by less SES disadvantage and greater access to economic resources. No association was observed between SES and the risks of peritonitis cure, catheter removal and permanent hemodialysis (HD) transfer. ♦ Conclusions: In Australia, where there is universal free healthcare, higher SES was associated with lower risks of peritonitis-associated hospitalization and death, and a lower risk of peritonitis in some categories. |
PDI, JULY 2015 - VOL. 35, NO. 4: SES AND PD PERITONITIS
by occupational status, rental status, living arrangements (alone or with family), marital status or surface area of residence (13). Finally, the Brazilian Peritoneal Dialysis Multicenter (BRAZPD) study observed that lower educational level, but not family income, was independently associated with increased risk of peritonitis (14). These studies have all investigated indicators of individual-level SES. To our knowledge, there has been no study investigating associations between area-level SES and peritonitis rates, or any published nation-wide study of associations between SES and peritonitis.
The aim of the present study, therefore, was to investigate the associations of area SES with the rate of peritonitis, time to first peritonitis and the outcomes of peritonitis, using Australian national registry (ANZDATA) data.
---
MATERIAL AND METHODS
---
STUDY POPULATION
In the present study, all non-indigenous Australian patients from the ANZDATA Registry who received treatment with PD between 1 October 2003 (when detailed peritonitis data started to be collected) and 31 December 2010 were analyzed. The data collected included demographic data, postal codes at the time of commencing renal replacement therapy, cause of primary renal disease, comorbidities at the start of dialysis (recorded by the patient's attending nephrologist), smoking status, body mass index (BMI) (< 18.5, 18.5-24.9, 25-29.9 and ≥ 30 kg/m²) and late referral (defined as commencement of dialysis within three months of referral to a nephrologist). The data were collected throughout the calendar year by medical and nursing staff in each renal unit and submitted annually to the ANZDATA Registry. Indigenous patients (Australian Aborigines and Torres Strait Islanders) were excluded because residential postal code at the commencement of PD does not always reflect their usual place of residence (15).
---
SOCIO-ECONOMIC STATUS
Socio-economic status was obtained based on Australian Socio-Economic Indexes for Areas (SEIFA) from the Australian Bureau of Statistics (ABS) (http://www.abs.gov.au/websitedbs/D3310114.nsf/home/Seifa_entry_page), similarly to previous investigations (16,17). Socio-Economic Indexes for Areas are summary measures of a number of variables that represent different aspects of relative socio-economic disadvantage and/or advantage in a geographic area. They provide more general measures of SES than are given by measuring income or unemployment alone. In this study, postal codes (postcodes) were used as the area unit. Postcodes were ranked into deciles, based on each of the four SEIFA variables; decile 1 is the most deprived or disadvantaged group of postcodes. A summary measure for a particular community was created by combining information about the households and individuals who live in that area based on Australian Census data. Each of the four available SEIFA variables (IRSAD, IRSD, IER and IEO) was evaluated separately.
Peritonitis rate was defined as the total number of episodes of peritonitis per number of years of PD therapy (episodes per patient-year at risk). In keeping with ISPD recommendations (1,3), relapsed peritonitis was counted as a single episode and patients with a PD catheter in situ who were not receiving PD were not included in peritonitis rate calculations.
The clinical outcomes examined were peritonitis cure, peritonitis-associated hospitalization, catheter removal, temporary hemodialysis (HD) transfer (in which patients subsequently resumed PD without a time frame requirement), permanent HD transfer and peritonitis-related death. A peritonitis episode was considered 'cured' by antibiotics alone if the patient was symptom free, the PD effluent was clear and the episode was not complicated by relapse, catheter removal or death. Peritonitis-related death was defined as any death within 30 days after an episode of peritonitis (18).
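The rate definition above (total episodes per patient-year at risk, with relapses collapsed into single episodes) can be sketched as a small calculation. The figures used below are those reported later in the Results section (7,295 included episodes over 16,242 patient-years) and reproduce the reported overall rate of 0.45; the function itself is only an illustrative sketch, not part of the registry's tooling.

```python
def peritonitis_rate(episodes: int, patient_years: float) -> float:
    """Peritonitis rate: total episodes per patient-year of PD therapy.

    Relapsed peritonitis should already be counted as a single episode,
    and time with a catheter in situ but off PD excluded, per the ISPD
    recommendations cited in the text.
    """
    if patient_years <= 0:
        raise ValueError("patient-years at risk must be positive")
    return episodes / patient_years

# Figures reported in the Results: 7,295 included episodes over
# 16,242 patient-years of follow-up.
print(round(peritonitis_rate(7295, 16242), 2))  # 0.45
```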
---
STATISTICAL ANALYSIS
Categorical results were analyzed using chi-square tests and presented as frequencies and percentages. Normally distributed results were analyzed using ANOVA and presented as mean ± standard deviation (SD). Non-normal continuous variables were analyzed using Kruskal-Wallis tests and presented as median (25th-75th percentile). Predictors of rates of PD peritonitis were determined by mixed-effects Poisson regression with initial PD hospital treated as a random effect (17). The independent predictors of the clinical outcomes of peritonitis were determined by a mixed-effects multivariable binomial logistic regression model with both initial PD hospital and patient treated as random effects. Mixed models are one of the standard tools for the analysis of clustered data where a sample of cases is repeatedly assessed (19). The covariates included in all the models were SEIFA deciles (IRSAD, IRSD, IER or IEO), age, gender, racial origin, BMI, late referral within three months of dialysis commencement, end-stage renal failure cause, smoking status and comorbidities, and estimated glomerular filtration rate. Initial empiric antibiotic regimens were added as covariates in the models of peritonitis outcomes. All the models were run separately for each SEIFA index. Data were analyzed using the software packages PASW Statistics for Windows release 18.0 (SPSS Inc., North Sydney, Australia) and Stata/SE version 12.0 (StataCorp, College Station, TX, USA). P values < 0.05 were considered statistically significant.
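Fitting the mixed-effects Poisson regression itself requires the registry data, but its basic building block, the incidence rate ratio (IRR) comparing a SEIFA decile against decile 1, can be sketched with a crude (unadjusted) Wald confidence interval. The episode and follow-up counts below are hypothetical, for illustration only; the paper's models additionally adjust for covariates and a hospital random effect.

```python
import math

def incidence_rate_ratio(e1, t1, e0, t0, z=1.96):
    """Crude IRR of peritonitis for one SEIFA decile versus decile 1,
    with a Wald 95% CI on the log scale. This omits the covariate
    adjustment and hospital random effect used in the paper's
    mixed-effects Poisson regression."""
    irr = (e1 / t1) / (e0 / t0)
    se = math.sqrt(1 / e1 + 1 / e0)  # SE of log(IRR) for Poisson counts
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi

# Hypothetical counts: 300 episodes over 800 patient-years (decile 10)
# versus 450 episodes over 900 patient-years (decile 1).
irr, lo, hi = incidence_rate_ratio(300, 800, 450, 900)
print(f"IRR = {irr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # IRR = 0.75, 95% CI 0.65-0.87
```

An IRR below 1 with a CI excluding 1, as in this made-up example, is the pattern the paper describes for several higher SEIFA deciles.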
---
RESULTS
From 1 October 2003 to 31 December 2010, a total of 7,419 non-indigenous Australian patients received PD treatment, with 3,585 patients experiencing 7,299 peritonitis episodes. SEIFA data were unavailable for the recorded postcodes of two patients (four episodes of peritonitis). Consequently, 7,417 patients were included in the analysis and were followed for 16,242 patient-years. Their characteristics are depicted in Table 1.
---
PERITONITIS RATE
The overall peritonitis rate was 0.45 episodes per patient-year of treatment. Calculated peritonitis rates for deciles of each SEIFA variable are shown in Figure 1.
No clear pattern could be identified between any of the SEIFA-based deciles and peritonitis rate. Mixed-effects Poisson regression demonstrated that incident rate ratios for peritonitis were generally lower in the higher SEIFA-based deciles compared with the reference (decile 1), although the reductions were only statistically significant in some deciles (IRSAD deciles 2 and 4-9; IRSD deciles 4-6; IER deciles 4 and 6; IEO deciles 3 and 6) (Table 2). In a subgroup analysis of gram-positive bacterial peritonitis, no clear or consistent relationship was observed with SES (Supplemental Table 9).
---
PERITONITIS OUTCOMES
Of the 7,417 patients included, 3,583 PD patients experienced 7,295 episodes of peritonitis that were included in the final peritonitis outcome analyses. Clinical outcomes of peritonitis within deciles of each SEIFA variable were generally similar (Supplemental Tables 1-4), with the exception of significantly lower percentages of hospitalization in decile 10 of each SEIFA variable and significantly shorter hospitalization durations in decile 10 of each SEIFA variable except IRSAD. Using mixed-effects multivariable logistic regression with decile 1 of each SEIFA variable as reference (Table 3 and Supplemental Tables 5-8), after adjusting for other confounding factors, SES did not predict cure of peritonitis, catheter removal or permanent HD transfer. However, lower probabilities of hospitalization were predicted by better SES advantage status (IRSAD decile 9), less SES disadvantage status (IRSD deciles 7, 9, 10), greater access to economic resources (IER deciles 9, 10) and higher educational and occupational status (IEO deciles 7, 8, 10), respectively. Moreover, lower probabilities of peritonitis-associated death were predicted by less SES disadvantaged status (IRSD deciles 7, 8, 10 and IER deciles 5, 8, 10).
---
DISCUSSION
This retrospective, multicenter registry analysis found that, compared with the lowest decile of area SES, higher SES was generally associated with lower peritonitis rates, although these risk reductions were only statistically significant in some deciles and they varied both within and between each of the four SEIFA variables used. The highest deciles of SES for each of the four SEIFA variables were associated with lower probabilities of hospitalization, and the least disadvantaged decile and the decile with greatest access to economic resources experienced significantly lower probabilities of peritonitis-associated death.
Studies investigating the relationship between SES and peritonitis are sparse. Similar to the findings of the present study, a recent large, multicenter study of 2,032 incident and prevalent Brazilian PD patients (BRAZPD) showed that SES based on family income was not clearly associated with peritonitis risk (14). However, contrary to the findings of the present study, lower educational level was associated with a heightened risk of first peritonitis. In contrast, a Hong Kong study involving 102 consecutive incident PD patients demonstrated that peritonitis risk was predicted by receipt of social security payments at PD commencement, although not by occupational status, rental status, living arrangements (alone or with family), marital status or surface area of residence (13). A subsequent study of 1,595 incident PD patients in the USA observed that indices of lower SES, such as unemployment, student status, and renting a house, were independently associated with increased risks of peritonitis (12). The disparity in findings between the different studies may be explained by the appreciable differences in healthcare, education and welfare systems that exist between the different countries (20,21). Australia provides universal access to government-funded free healthcare, heavily government-subsidized medications (with a modest annual "safety net" cap on out-of-pocket expenses incurred by Australian residents), universal mandatory free education to high school, and welfare payments for disadvantaged groups (such as the unemployed, the elderly, and people with disabilities). These factors, which are not uniformly present in the countries of the other studies, may have significantly mitigated the impact of lower SES on PD peritonitis risk and outcomes. In contrast, SES may be expected to have more impact in countries which require individuals to make significant copayments towards their healthcare, thereby potentially disadvantaging patients from lower SES backgrounds who cannot afford even small out-of-pocket expenses. For example, disadvantaged US citizens are less likely to have insurance, and may face significant out-of-pocket costs for many services (17). Consequently, the observed associations between SES and peritonitis risk may be healthcare system-specific, such that the results of the present study may not be generalizable to other countries with appreciably different healthcare systems. Another potential factor accounting for the observed differences in the impact of SES on peritonitis risk and outcomes in Australia compared with other countries may relate to differences in the methods used to evaluate SES.
The present study utilized area-based Socio-Economic Indexes for Areas (SEIFA) rather than the individual-level SES measures used in previous studies. By drawing on up to 21 different census variables for each SES index, SEIFA provided a more comprehensive assessment of SES than the limited number of single variables used in other studies (such as family income, educational level, or house rental).
To our knowledge, the present study is the first to have investigated the effect of SES on peritonitis outcomes. Although no association was observed between SES and rates of peritonitis cure, catheter removal or permanent HD transfer, we found that groups with the least socio-economic disadvantage and the greatest access to economic resources experienced significantly lower risks of both hospitalization and peritonitis-associated death, in spite of the availability of universal access to free healthcare. Previous investigations have shown that Australians from advantaged backgrounds were more likely to have additional health insurance (22) and more likely to receive longer consultations with general practitioners (23). These factors may have contributed to superior peritonitis outcomes in the highest SES decile in the current study. For example, it is possible that higher SES patients may have had better access to healthcare leading to earlier presentation with peritonitis symptoms, earlier diagnosis and treatment, and ultimately better outcomes. Alternatively, other factors such as lesser household crowding in higher SES patients may have been operative. These hypotheses were unable to be tested in the present study due to the limited data collected by the ANZDATA Registry. In addition, there is likely to be differential selection of patients who commence PD in Australia. PD is uncommon in privately-funded hospitals (24) and is more common among patients from remote areas of Australia, who have generally lower SES than city dwellers (25).
The strengths of this study include its very large sample size and inclusion of virtually every Australian patient receiving PD during the study period. SES was evaluated using four indices, which in turn include a range of factors, rather than being primarily related to income. Moreover, a range of peritonitis outcomes was examined.
In conclusion, in Australia, where there is universal, nearly-free healthcare, higher SES was associated with lower risks of both peritonitis-associated hospitalization and death, but similar risks of peritonitis cure, catheter removal and permanent HD transfer. The effect of SES on peritonitis rates was uncertain, as the general reductions in peritonitis rates observed in higher SES categories were modest and only statistically significant in some (but not all) categories, in a manner which varied within and between each of the four SES variables examined. Further research evaluating strategies for overcoming poorer peritonitis outcomes in socio-economically disadvantaged patients is warranted. |
Sports events can be commercialized by changing the meaning of politics and ideologies (Özcan, 2001). Football, which is often regarded as the most thrilling sport and a significant aspect of many people's lives, has implications that go beyond those of a game and a pastime (Aydın et al., 2008). This fan phenomenon has been the greatest benefactor of the football industry. Supporting a team is essentially a civic ritual carried out by participants who have a strong attachment to it (Eker, 2010). Fanaticism, on the other hand, is a behavior that appears in a variety of contexts, such as politics and sports. The word "fanaticism" comes from the Latin word fanum, meaning "temple" or "holy place"; "fanaticus" was used to characterize those who were utterly, insanely committed to the temple. The word "fanatic" in English refers to a person who has wild, illogical, or religious impulses (Oxford Dictionary of English, 2018). | INTRODUCTION
---
Sport
As a necessary component of life, sport has developed into an important subject of study and practice that strongly influences individuals, society, and the social structure. Sports activities have fostered important and universal values such as camaraderie, solidarity, and tolerance, which support people's long-term social and personal growth. Sports activities also play a significant role in combating illnesses brought on by modern lifestyles (Balcı et al., 2018).
Sport is an important phenomenon that supports people physiologically, psychologically, and sociologically, reinforces many shared norms, and unites individuals around a common identity.
Sports events, which are an inseparable part of daily life, appear as a human phenomenon built around the trio of spectators, champions, and medalists; their visual aspect is prominent, they draw the masses after them, and they can be commercialized. Fanaticism, or fanatical behavior, has been researched for many years (Dwyer et al., 2018). Fanaticism characterizes fans' attachment to the sport between matches and across teams; for people who follow football, this indispensable passion can even turn them into losers (Murphy et al., 1990). Groups in which fanaticism is reinforced may engage in antisocial or violent behavior (Dalpian et al., 2014). Fanaticism exhibits extreme actions, including violence, in a socially undesirable framework, in contrast to love of and attachment to a team expressed within the bounds of societal acceptance (Kazan, 2009).
The size of the sports industry, its cultural support, the large number of spectators, and its strong economic impact are among the most important factors that increase this interest and feeling of admiration (Naumenko, 2018). Fanatic spectators may react badly to competitive losses or to failures of the relevant sports service, resulting in apathy or pessimism towards the team (Jovanovska, 2020). Since fanatic behavior can turn into violence, this interest needs to be managed: sport is a social phenomenon that continues to exist in order to support healthy individuals and reduce violence in society, a purpose that also conforms to the living principles of the modern world (Baltas, 2021).
---
Sport Literacy
In its most general form, the concept of literacy can be expressed as the ability to read and write texts written in an alphabet (Reinking, 1994). At its most basic, literacy means having the ability to read written texts in a language, to make sense of what one reads, and to understand all of this. As understood today, literacy is the state of having the competence to use a set of communicative symbols that are correctly understood by the society in which one lives (Kellner, 2001). Literacy is thus a skill based on the correct use of operational symbols interpreted by the general public. The quality of being a communicative symbol interpreted by society makes literacy a system that renews itself in line with the expectations of every age (Önal, 2010). From this perspective, although literacy is seen as a skill that meets the conditions and needs of its period, it has an identity that renews itself and its meaning through periodic change (İşler, 2002).
Since the concept of literacy is found in all areas of life, several types have emerged: for example, information literacy, in the sense of having and using the information necessary for life; cultural literacy, which explores the elements that make up a society and their causes and consequences; and universal literacy, which aims to look at events, situations, and phenomena from a universal perspective (Gürtekin, 2019).
Different and multiple applications can occur in education systems; multiple intelligences, multiple learning environments, multiple perspectives, and interdisciplinary perspectives are examples. Literacy, which has taken its place in the lives of individuals and societies, can provide important gains for them, but many different understandings of literacy may emerge. The literacy used today can be classified in more than one way. In the emergence of these types, the social status, perspectives, expectations, and interests of individuals are influential. Individuals can show their talents in different types of literacy, which shows that their areas of interest differ (Önal, 2010).
The concept, which was defined as "Media Education, Media Pedagogy, Media Education" when it first appeared, is now expressed as media literacy (İnal, 2009). When media literacy is examined as a concept, the first condition is to have the ability to reach media messages, to perceive these messages correctly and to produce new messages. Paker (2009) perceives the concept of media literacy as reading and critically evaluating the incoming messages in detail, understanding the hidden meanings, if any, and producing new messages. Mora (2008), on the other hand, defined media literacy education as a practice that enables children, youth and adults to be evaluated with an inquiring and rational perspective in order to protect them from the current negative effects of the media. Özel (2018) stated that media literacy is different from the classical media and discussed media literacy as the purpose of the message and the stages of formation of this message.
It is a well-known fact that sports aim to raise individuals who are physically and mentally healthy and open to change. Sports literacy, expressed as the level of proficiency that enables information-based decisions in the selection of the sports equipment used in our lives (Demir et al., 2019), is considered necessary for creating sports awareness. Due to economic and technological developments, the quality and level of education are increasing day by day; this results in an increase in the duties and responsibilities of students (Yıldırım, 2015). The phenomenon of fanaticism among today's sports fans shapes people's tendencies and orientations in many ways. The current literature shows that this attitude affects fans' lives, from everyday shopping to scheduling daily activities around matches. Sports literacy is also an important factor when fans follow the teams they support through the media, social media, print media, television, and stadiums. Students' sports literacy likewise affects their fanaticism: their following habits are shaped by their interests, needs, and expectations. The interaction between sports literacy and fanaticism is thought to be affected by many variables, such as students' age and gender, their purchasing of teams' licensed products, the team they follow, and the way they follow their teams.
In this study, it is aimed to examine the fanaticism levels of university students in terms of sports literacy and some variables in the light of current literature information. When the existing literature is examined, it is thought that the studies on this subject are insufficient. As a result of the study, it is aimed to contribute to the literature by obtaining information about the sports literacy of university students in terms of gender variable, age variable and preference to follow their teams, the variable of the team they support, and the variables of purchasing the licensed products of the team they support. This study has an important place in determining the fanaticism levels of university students and in terms of giving us information in line with the variables mentioned above. In addition to these, it is expected that our study results will contribute to all the studies that have been done and will be done in the field.
---
Research Questions
The questions to be answered during the research are as follows:
1. Is there a difference in the level of fanaticism of the participants in terms of the gender variable?
2. Is there a difference in the level of fanaticism of the participants in terms of the age variable?
3. Is there a difference in the level of fanaticism in terms of the participants' preference for following their teams?
4. Is there a difference in the level of fanaticism in terms of the team that the participants support?
5. Is there a difference in the level of fanaticism in terms of the participants' preference for purchasing licensed products?
6. Is there a relationship between how the participants follow their teams and sports literacy?
---
METHOD
In this section, information on the study model, study group, data collection tools, collection of data and analysis of data sub-sections are presented.
---
Study Model
This study was carried out with the general survey model, which is widely used in quantitative research. According to Büyüköztürk et al. (2013), the data obtained from this method can be easily observed, measured, and analyzed. It is an empirical research approach. In addition, the general survey model reflects the population by reaching the entire population or a representative sample of it (Karasar, 2012; Şimşek, 2012).
---
Participants
The
---
Data Collection Tool
A personal information form developed by the researchers was used to collect information about the independent variables. In addition, the Football Fans Fanaticism Scale (FFFS), developed by Taşmektepligil et al. (2015), was used as a data collection tool. The FFFS is a four-point Likert-type scale consisting of 13 positively worded items, with the response options "a) I strongly agree" (1), "b) I agree" (2), "c) I do not agree" (3), and "d) I do not agree at all" (4). When total scores are examined, those scoring 1-13 are classified as Not a Supporter at all, 14-26 as Team Supporter, 27-39 as Fanatic, and 40-52 as Extremely Fanatic. The scale has no sub-dimensions. The Cronbach's alpha reliability value for the overall scale was found to be 0.84. The scale was taken from its original source (Taşmektepligil et al., 2015) and included in this study.
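The scoring rule described above can be sketched as a small classifier. The cut-offs are taken exactly as reported in the text (note that 13 items scored 1-4 give a minimum total of 13, so the 1-13 band only captures the floor score); this is an illustrative sketch, not part of the published scale materials.

```python
def fffs_category(item_scores):
    """Classify a respondent from 13 FFFS item scores (each 1-4),
    using the total-score bands reported for the scale
    (Taşmektepligil et al., 2015)."""
    if len(item_scores) != 13 or not all(1 <= s <= 4 for s in item_scores):
        raise ValueError("expected 13 item scores, each between 1 and 4")
    total = sum(item_scores)
    if total <= 13:
        return total, "Not a Supporter at all"
    if total <= 26:
        return total, "Team Supporter"
    if total <= 39:
        return total, "Fanatic"
    return total, "Extremely Fanatic"

print(fffs_category([3] * 13))  # (39, 'Fanatic')
```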
---
Data Analysis
While evaluating the data in our study, the SPSS Statistics 25 program was used for statistical analysis. Descriptive statistical methods (frequency, mean, standard deviation, and percentage) were used. Before the hypothesis tests, skewness and kurtosis values were examined to determine the normality of the data; since the values obtained were between +1.5 and -1.5, the data were accepted as normally distributed (Tabachnick and Fidell, 2013). Therefore, the Independent Samples t-Test and One-Way ANOVA were used. When comparing multiple groups, Levene's test was applied and Scheffé's test, one of the post-hoc tests, was used. Results were evaluated at a 95% confidence interval and the level of significance was accepted as p<0.05.
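The normality screen described above (skewness and kurtosis within ±1.5; Tabachnick and Fidell, 2013) can be sketched with simple moment-based estimates. This is a simplified illustration, not the SPSS procedure used in the study.

```python
import math

def skew_kurtosis(xs):
    """Sample skewness and excess kurtosis (moment-based)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3  # excess kurtosis; 0 for a normal distribution
    return skew, kurt

def roughly_normal(xs, bound=1.5):
    """The decision rule described in the text: treat the data as normal
    when both skewness and kurtosis fall within the +/-1.5 bounds."""
    skew, kurt = skew_kurtosis(xs)
    return -bound <= skew <= bound and -bound <= kurt <= bound

print(roughly_normal([1, 2, 2, 3, 3, 3, 4, 4, 5]))  # symmetric sample -> True
```

A heavily skewed sample (say, twenty 1s and a single 100) fails the same check, which is when the text's non-parametric alternatives would apply.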
---
FINDINGS
When Table 1 is examined, it is seen that the majority of the participants are male (63.3%). Looking at the age groups, the 18-20 age group (59.2%) has the highest rate. Within the variable of using licensed products, the group answering No (62.5%) is larger. For the variable of preference for following their team, the group answering Internet (43.3%) is larger. In terms of the team supported, the group answering Galatasaray (38.5%) has the highest rate. When the gender variable of the participants was examined with a t-test, a statistically significant difference (p<.05) was observed in favor of the male participants (Table 2).
Considering the age variable of the participants, the ANOVA results shown in Table 3 indicate a significant difference (p<0.05). When the relations between the groups are examined, significant differences are seen between groups a-b, a-d, b-d, and c-d.
Regarding the licensed product use variable of the participants, there was no statistically significant difference on the football fan fanaticism scale (Table 4).
When the one-way ANOVA results for the preferred way of following the teams are examined in terms of football fanaticism (Table 5), a significant difference is observed (p<0.001). When the relationships between the groups are examined, significant differences are seen between groups a-c, a-d, b-c, and b-d (p<.05).
In terms of the team variable of the participants (Table 6), there was no statistically significant difference between the participants in terms of the Football Fans Fanaticism Scale (p>.05).
---
DISCUSSION AND CONCLUSION
University students' levels of football fanaticism were examined in terms of gender, age, use of licensed products, the way they follow competitions, the team they support, and sports literacy.
When we analyzed the fanaticism levels in terms of the gender variable, a statistically significant difference (p<0.05) was found, with average scores in favor of the male participants. When the studies in the literature are examined, Bahçe and Turan (2022) concluded that the level of fanaticism was higher among male participants. On the other hand, Yıldız and Açak (2018) found a significant gender difference in their study on high school students but, unlike our study, observed that the average scores of female participants were higher. In addition, Dimmock and Grove (2005) found no significant difference in terms of gender in their study. According to the research, while women find it difficult to include the phenomenon of football in their own lives, men can easily express it through a sense of belonging (Doewes et al., 2020). In another study, it was determined that people develop a kind of identity perception around consumption and brands (Fuschillo, 2020). This emerges as another factor strengthening the theory that men can develop a perception of fanaticism according to club advertising and brand value. According to another study on the psychology of football fanaticism, fans are influenced by different phenomena such as advertisements to support their clubs, and this leads them to show higher interest (Budi and Widyaningsih, 2021). Another study, which states that the phenomenon of fanaticism reflects positive or negative feelings towards the club one wants to support, emphasizes that this situation develops with the emergence of violence and the feeling of competition.
According to that research, showing interest in violence and approaching criminal elements fuels fanaticism (Agusman and Setiawan, 2018).
When fanaticism levels were examined in terms of the age variable, a significant relationship was found. In our study group, participants aged 18-20 had the highest mean scores, and mean scores decreased as age increased. Research emphasizes that objects of interest become less important with advancing age (Brooks, 2018). A study of how sporting admiration changes over the life course, grounded in social identity theory, shows that adults' emotional well-being and life satisfaction increase as they get older, and accordingly they become less likely to be members of clubs or fan groups (Gantz and Lewis, 2021). Another study of 1,547 participants, examining the relationship between fanaticism, membership cards, and age, concluded that membership-card-style club loyalty can strengthen feelings of fanaticism at an early age (Setiadi and Franky, 2019). In the literature, Güler (2020) and Açak et al. (2018) obtained results similar to ours, and Kurak (2019) likewise found a significant difference in terms of the age variable.
When the relationship between fanaticism levels and the use of licensed products was examined, no significant relationship was found, although the mean scores of those who answered "no" were higher. In contrast to our study, Yıldız and Açak (2018) reached significant results for this variable in their study of high school students.
When fanaticism levels were examined according to how the participants follow competitions, a significant difference was found. The largest share of participants followed their teams' competitions on the internet, while those who watched from the stadium, followed by those who used other channels, had the highest mean scores. This suggests that fans who go to the stadium to support their teams, and those who follow their teams by every available means, are the most fanatical. Yıldız and Açak (2018) and Kural (2017) also reached significant results for this variable in their studies.
When fanaticism levels were examined in terms of the supported team, no significant difference was found; the groups achieved similar mean scores. In contrast to our study, Kurak (2020) concluded that fans supporting the Galatasaray team had higher fanaticism scores.
Regarding students' sports literacy and how they follow their teams, 208 (43.3%) followed on the internet, 156 (32.3%) watched from the stadium, 108 (22.5%) watched on television, and 8 (1.7%) used all of these. Accordingly, we conclude that students mostly use the internet and social media to follow developments in their sports teams. Katırcı (2009) concluded that male individuals mostly watch television to follow sports competitions. Although this differs from our result, watching competitions on television was more usual under the technological conditions of that time; today, with the development of the internet, following events online supports the plausibility of our finding.
Although levels of fanaticism differed from individual to individual, some of the students in our study group showed high levels of fanaticism on the variables reported in the findings, while others did not. Similar results have been reported in the literature (Brawn et al., 2015; Fagbemi, 2018; Kramer et al., 2018; Pruna and Bahdur, 2016; Iwuagwu et al., 2023).
In summary, in our study examining university students' fanaticism levels in terms of several variables, male individuals were found to be more fanatical than females; their greater interest in football likely plays an important role in this result. Younger students were more fanatical than older students, which may reflect the fact that individuals identify with and feel they belong to their teams more strongly at younger ages and take a more emotional approach when forming their attitudes. Those who follow their team's matches from the stadium and through all available channels had higher levels of fanaticism; individuals with a strong sense of belonging want to watch matches from the stadium, the closest and most supportive place for their team. Finally, within the scope of sports literacy, the students followed their teams mostly on social media. In the internet age, many hobbies and much communication take place through social media accounts, which helps explain this result.
In our study of the football fanaticism levels of university students living in Turkey, male students were more fanatical than female students, and students aged 18-20 were more fanatical than other age groups. Those who watch matches from the stadium showed a more fanatical tendency than the other groups, and university students used social media the most when following information about sports.
We expect our study to serve as a source for other studies in the field and to contribute to the literature alongside previous work. Future studies could compare different and similar high school and university student groups, and could add different grade levels and age groups to these comparisons. Studies of students living in different countries and cultures could compare fanaticism levels in the context of country and culture. In addition to fanaticism, investigating other emotional states among students, and examining the underlying causes of fanaticism through thematic analysis, would play an important role in the psychological and sociological understanding of fanaticism.

---
The aim of this study was to explore the influence of characteristics of nurses and older people on emotional communication in home care settings. A generalized linear mixed model was used to analyze 188 audio-recorded home care visits coded with the Verona Coding Definitions of Emotional Sequences. The results showed that most emotional distress was expressed by older females or in visits with female nurses. The elicitation of an expression of emotional distress was influenced by the nurses' native language and profession. Older women aged 65-84 years were given the most space for emotional expression. We found that emotional communication was primarily influenced by sex for nurses and older people, with an impact on the frequency of expressions of and responses to emotional distress. Expressions of emotional distress by older males were less common and could risk being missed in communication. The results have implications for students' and health professionals' education in increasing their knowledge of and attentiveness to the impacts of their own and others' characteristics and stereotypes on emotional communication with older people. During home care visits, older people reveal essential information about their well-being and health, such as worries and needs (Kristensen et al.

---
Aim
The aim of this study was to explore the influence of characteristics of nurses and older people on emotional communication in home care settings. The following research question was addressed: Which characteristics of nurses and older people influence emotional communication in terms of: (i) older people's expressions of emotional distress; (ii) elicitations of expressions of emotional distress; and (iii) nurses' responses in providing or reducing space for the further exploration of older people's emotional distress?
---
METHODS
---
Design
This study was an explorative, cross-sectional study of Swedish home care settings. The number of participants and audio-recordings used was decided under the COMHOME project to enable analyses and comparisons between Sweden, Norway, and the Netherlands (Hafskjold et al., 2015).
---
Participants
In Sweden, home care services are performed by registered nurses (RN) or nurse assistants (NA). Therefore, both registered nurses (n = 11) and nurse assistants (n = 20) were included in the study. Inclusion criteria for nurses were employment in home care settings and the ability to speak Swedish. Inclusion criteria for the older people included an age of ≥65 years and being Swedish speaking without speech or cognitive impairments. The number of participants was determined from the COMHOME project. In total, 31 nurses and 81 older people participated in the study.
---
Setting
A convenience sample of 12 home care institutions located in a county of central Sweden was approached to collect data from audio-recorded home care visits between nurses and older people. Eight of these home care institutions agreed to participate. In this study, home care refers to care provided in the home of an older person and involves different activities, such as assistance with daily living tasks, personal care, and medical administration and procedures.
---
Ethical considerations
Ethical approval was obtained from the Regional Ethical Review Board of Uppsala, Sweden (Dnr 2014/018). Participating nurses and older people received oral and written information on the study, their participation, and their rights as participants, and on how the data would be handled, stored, and presented/published. All of the participants had to be able to provide written informed consent to participate. The participants were guaranteed confidentiality.
---
Data collection
Data from audio-recorded home care visits were collected from August 2014 to November 2015. The study was presented to nurses at different workplace meetings, and the nurses were then asked to participate. Those willing to participate were asked to inform and recruit older people who met the inclusion criteria of the study. No information was collected about the nurses and older people who declined. Nurses who declined cited reasons such as heavy workloads and feeling stressed, while older people who declined predominantly stated that they did not like the idea of participating.
Naturally occurring communication during the home care visits was recorded by the nurses, who were instructed to wear recording equipment on their upper arm, to start recording when they entered an older person's home, and to stop recording when they left. No directives and information about how the communication was to be analyzed were presented, as this might have affected the communication and risked biasing the data. The older people could be recorded once or several times depending on the organization of home care visits. Each nurse made 1-10 audio-recordings (mean = 6.06); most nurses provided seven audio-recordings. The goal was to collect approximately 200 audio-recordings. Incomplete recordings were excluded (e.g. when the recording device did not work properly). From this data-collection approach, we collected 188 audio-recordings of home care visits, each with a duration between 1 and 86 min (mean = 14).
---
Data analysis
The analysis of this study was based on communication previously coded during home care visits (Höglander et al., 2017; Sundler et al., 2017) with the Verona Coding Definitions of Emotional Sequences (VR-CoDES) (Del Piccolo et al., 2011; Zimmermann et al., 2011) and on the participants' characteristics (age, sex, language, and profession). The VR-CoDES is an instrument for coding patients' expressions of emotional distress (Zimmermann et al., 2011) and health providers' responses to these emotional expressions (Del Piccolo et al., 2009). The VR-CoDES is descriptive and nonnormative; it does not label responses as good or bad (Del Piccolo, 2017).
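As an illustration of the kind of record this coding produces, one coded unit can be represented as a small data structure. This is a sketch only; the field names are invented for the example and are not part of the official VR-CoDES manual:

```python
from dataclasses import dataclass

@dataclass
class EmotionalSequence:
    """One VR-CoDES-style coded unit. Field names are illustrative,
    not part of the official coding scheme."""
    elicited_by: str         # "nurse" or "older_person"
    expression: str          # "cue" (ambiguous hint) or "concern" (explicit)
    explicit_response: bool  # nurse's response explicitly refers to the emotion
    provides_space: bool     # response invites further disclosure

# A nurse-elicited concern, met with an explicit, space-providing response:
seq = EmotionalSequence("nurse", "concern", True, True)
print(seq.expression)  # concern
```

Representing each sequence this way makes the three research questions concrete: they correspond to modeling `expression`, `elicited_by`, and `provides_space`, respectively.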
For the coding process, elicitations were coded based on whether the expressed emotional distress was elicited by a nurse or an older person. Thereafter, the expressed emotional distress was coded either as a concern, defined as "a clear and unambiguous expression of an unpleasant current or recent emotion where the emotion is explicitly verbalized" (Zimmermann et al., 2011, p. 144), or as a cue, defined as "a verbal or non-verbal hint which suggests an underlying unpleasant emotion but lacks clarity" (Zimmermann et al., 2011, p. 144). Examples of verbal hints include words that are vague or unspecified in describing emotions, words emphasizing physiological or cognitive correlates of an unpleasant emotional state, exclamations, ambiguous words, or a patient's repetition of his/her previous neutral expression. Examples of non-verbal hints include crying, sighing, a trembling voice, or silence after a provider's question (Zimmermann et al., 2011). In conjunction with emotional expressions, the nurses' immediate responses were coded and divided into explicit or non-explicit responses, and into whether responses provided or reduced space for the further disclosure of given cues or concerns; these were then divided into further categories for the finer definition of responses (Del Piccolo et al., 2011).

Noldus Observer (NO) XT, version 12.0 was used to code the audio-recordings (Grieco, Loijens, Zimmerman, Krips, & Spink, 2011). NO allows data to be analyzed without the need for transcription; the codes were loaded directly into NO audio files as the expressions occurred in the communication. Inter-rater reliability for the VR-CoDES was established by the first and second authors, who separately coded 15 audio-recordings. Agreement was calculated with Cohen's kappa and resulted in an acceptable level of κ = .64 (P < .01). The remaining audio-recordings were coded by the first author.
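Cohen's kappa, used above for inter-rater reliability, corrects raw agreement between two coders for the agreement expected by chance. A minimal, self-contained sketch with invented codes for ten utterances (not the study's data):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters assigning categorical codes:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(coder_a)
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement from the two raters' marginal code frequencies:
    p_chance = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical codes for ten utterances by two independent coders:
a = ["cue", "cue", "concern", "neutral", "cue", "neutral", "concern", "cue", "neutral", "cue"]
b = ["cue", "concern", "concern", "neutral", "cue", "neutral", "cue", "cue", "neutral", "cue"]
print(round(cohens_kappa(a, b), 2))  # 0.68 (raw agreement 0.80, chance agreement 0.38)
```

With these invented codes the raw agreement of 0.80 shrinks to κ ≈ 0.68 once chance is removed, illustrating why a kappa of .64, as reported, can coexist with fairly high raw agreement.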
Expressions of emotional distress consisting of cues and concerns, their elicitations, and the types of responses (i.e. providing or reducing space) were analyzed, along with the characteristics of the nurses and older people, via a generalized linear mixed-model (GLMM) analysis. The GLMM analysis was conducted using IBM SPSS Statistics for Windows 24 (IBM Corporation, 2016) to examine how the participants' characteristics influenced emotional communication (VR-CoDES). The GLMM analysis was useful in accounting for nested and cluster-related correlations in the data: the nurses and older people participating in the study were recorded more than once, and different nurses could visit the same older people. All analyses of each research question started with an empty model (model 0) containing the intercept and residuals for the nurses, older people, and home care visits. The nurses' and older people's sexes were added as variables (model 1), after which the languages, professions, and age groups of the nurses and older people were added as variables (model 2).
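To make the nesting that motivates the GLMM concrete (visits within older people within nurses), the following simulation sketches data of the kind a random-intercept GLMM accounts for. All effect sizes and covariates are invented for illustration; this is not the authors' model:

```python
import math
import random

random.seed(1)

def simulate_visit_outcomes(n_nurses=31, clients_per_nurse=3, visits=6):
    """Simulate binary 'emotional distress expressed' visit outcomes with
    nurse-level and client-level random intercepts on the logit scale."""
    rows = []
    for nurse in range(n_nurses):
        u_nurse = random.gauss(0, 0.5)        # nurse random effect
        female_nurse = random.random() < 0.8  # invented covariate
        for client in range(clients_per_nurse):
            u_client = random.gauss(0, 0.8)   # client random effect
            for _ in range(visits):
                eta = -0.5 + 0.6 * female_nurse + u_nurse + u_client
                p = 1 / (1 + math.exp(-eta))  # inverse-logit
                rows.append({"nurse": nurse,
                             "client": (nurse, client),
                             "female_nurse": female_nurse,
                             "distress_expressed": random.random() < p})
    return rows

data = simulate_visit_outcomes()
print(len(data))  # 558 visit records (31 nurses x 3 clients x 6 visits)
```

Because visits from the same nurse or client share a random intercept, their outcomes are correlated; ignoring this clustering (e.g. with ordinary logistic regression) would understate the standard errors of the fixed effects.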
---
RESULTS
---
Sample description
In total, 316 expressions of older peoples' emotional distress with nurses' subsequent responses were identified from the home care visits. Emotional expressions, together with elicitations and responses to emotional distress, were found in approximately half of the home care visits and are reported in our previous work (Höglander et al., 2017;Sundler et al., 2017) (Table 1). Expressions of emotional distress occurred in both long and short visits. The shortest visits including expressions of emotional distress were 2 min long (n = 5), of which one included five cues and one concern. The nurses were RN or NA (Table 2). Both female and male nurses and older people participated, and there were visits between the same and different sexes (Table 3). There were no reliable interactions between characteristics used in the models (sex, language, profession, and age), and we thus omitted interactions in the presentation of our results.
---
[Table 1] [Table 2] [Table 3] [Table 4]
---
Influence of nurses' and older people's characteristics on expressions of emotional distress
Expressions of emotional distress during the home care visits were influenced by the sex of the nurses and older people (models 1 and 2 in Table 4). Being an older female (.775) had a slightly stronger effect on expressions of emotional distress than having a female nurse (.579). Older females expressed more emotional distress during home care visits than older males, and expressions of emotional distress were also associated with being a female nurse: female nurses received more expressions of emotional distress than male nurses.
Model 2 had the best fit in explaining some of the variance between the older people (R² = 17.73%), but this was mostly due to being female in model 1 (R² = 16.45%); 82.27% remained unexplained. Model 1 explained more of the variance observed in the nurses (R² = 24.34%) than model 2 (R² = -2.12%), indicating that more of the nurse-level variation remained unexplained by the characteristics under study.
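The R² figures reported here behave like a proportional reduction in variance: the share of a random-effect variance component from the empty model that disappears once predictors are added. A sketch with hypothetical variance components (the empty-model variances are not reported in this excerpt), which also shows how a negative value such as -2.12% can arise when the fitted model's variance component exceeds the empty model's:

```python
def variance_explained(var_empty, var_model):
    """Proportional reduction in a random-effect variance component,
    in percent: 100 * (var_empty - var_model) / var_empty."""
    return 100 * (var_empty - var_model) / var_empty

# Hypothetical variance components (empty model normalized to 1.0):
print(round(variance_explained(1.0, 0.8227), 2))  # 17.73
print(round(variance_explained(1.0, 1.0212), 2))  # -2.12 (model fits worse)
```

Negative values are therefore not an error: adding predictors can slightly inflate a variance component estimate, as seen for the nurse level in model 2.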
---
Influence of nurses' and older people's characteristics on elicitations of expressions of emotional distress
Older people's expressions of emotional distress were elicited either by themselves or by the nurses. In model 2, the introduction of the language, profession, and age variables improved the fit of the model (Table 5). The form of elicitation (i.e. who initiated the cue or concern) was significantly associated with language and profession, whereas age was not. Profession had almost twice as strong an effect (-1.452) on elicitation as language (.777). Nurses whose native language was Swedish elicited expressions of emotional distress significantly more often than the older people themselves did, whereas nurses with a second language elicited such expressions almost as often as the older people did. Regarding profession, RN elicited expressions of emotional distress more often in their communication with older people than NA did.
---
[Table 5] [Table 6]
Model 2 was the most beneficial in explaining the nurses' level of variance (R² = 75.88%) and fully explained the older people's level of variance (R² = 100%). However, home care visit variance was not further explained in model 2 (R² = 34.81%), showing that there were variations in home care visits that were not explained by the characteristics used. Therefore, the models were not beneficial in explaining the variance in home care visits.
---
Influence of nurses' and older people's characteristics on responses to emotional distress
In their responses, the nurses could either provide or reduce space for the further disclosure of older people's emotional distress (models 1 and 2 in Table 6). Both the sex and age of the older people were found to influence the nurses' responses. Being an older female had a stronger effect (1.237) on the type of response given than older people's age (-1.117) and was significantly associated with the type of response given by the nurse. Older females mostly received responses that provided, rather than limited, space for their emotional distress, whereas older males received almost as many responses reducing space as providing it. Older people aged between 65 and 84 years significantly more often received responses that provided, rather than reduced, space for their concerns compared to those aged ≥85 years.
Regarding random effects, the degree of nurse variance was small; model 2 was the most beneficial in explaining the older people's level of variance (R² = 100%). The characteristics shown in model 2 explained more of the variance in home care visits (R² = 17.28%) than model 1 (R² = .39%), but 82.72% remained unexplained.
---
Discussion
Nurses' and older people's characteristics affected emotional communication differently depending on how emotional distress was expressed and based on who elicited the expressed emotion and the type of response provided by the nurse. Nurses play an important role in providing emotional support, because they can acknowledge and facilitate disclosure and find coping mechanisms for dealing with emotions (Sheldon, Barrett, & Ellington, 2006).
In the first GLMM analysis, being female was associated with the expression of emotional distress. These results do not correspond with previous studies using the VR-CoDES for hospital consultations, where sex was not found to be significantly associated with the occurrence of emotional expression (Eide et al., 2011;Mjaaland, Finset, Jensen, & Gulbrandsen, 2011). This could be partly attributed to the different care contexts of hospitals and home care settings that might influence emotional communication. The findings of this study are more consistent with other work on home care, showing that older females express their concerns and complaints more often than older males (Hellström & Hallberg, 2001). The findings also show that females expressed their emotions more often than males. This could be related to gender stereotypes, with males being reluctant to seek help (Addis & Mahalik, 2003), or to the possibility that older females experience more emotional distress than males.
Females can also be perceived as being more caring and interested in emotional issues. This consideration might help to explain the significantly larger number of expressions of emotional distress from older people received by female nurses than by their male colleagues. In a previous study, female physicians were found to be more engaged in emotional discussion and to facilitate more patient-centered dialogue than male physicians (Shin et al., 2015). It is important to be aware of existing differences (i.e. sex) regarding expressions of emotional concerns, and to acknowledge the possible impacts of stereotypes. Failing to acknowledge the impact of stereotypes can increase the risks of older males' emotional distress going unnoticed. The differences observed between females and males might also be due to females experiencing more disabilities and concerns than males (Hellström & Hallberg, 2001;Newman & Brach, 2001), highlighting the need to explore expressions of emotional distress further.
From the second GLMM analysis, profession and language appeared to influence who elicited the expression of emotional distress. Being a nonnative speaker does not necessarily lead to challenges with communication (Khatutsky, Wiener, & Anderson, 2010). However, native language competence was associated with the elicitation of emotional expression. To establish whether the association with elicitations observed was due to differences in native language competence or perhaps due to cultural differences is difficult. More knowledge is needed on the impact of language and culture on emotional communication. Differences observed in terms of professions might be related to the greater focus of RN on medical and healthcare aspects of their home care visits and in their interactions with older people (Sundler et al., 2017). Discussions and questions regarding older people's perceived health statuses, illness troubles, or medications might elicit expressions of emotional distress in home care visits. These information-seeking questions are important in learning more about how older people perceive their health, but they might also elicit emotional distress.
In the third and final GLMM analysis, the sex and age of the older people presented a significant association with the types of responses provided by the nurses. The age differences observed might be related to age-related differences in how older people express and control their emotions (Gross et al., 1997;Isaacowitz et al., 2017). Nurses might be challenged in their attentiveness to emotional distress when encountering age-related changes. When nurses do not perceive older people's hints of emotional distress, they might offer fewer responses that provide space for such distress. This can be essential in providing space for older people's needs for emotional talk and comfort. Emotional communication is emphasized because older people's abilities to handle their emotions are related to their experience of health (Suri & Gross, 2012). The effects of older people's age and sex on responses providing space for emotional distress might also be related to social norms or stereotypes, which might influence how nurses perceive and respond to emotional expressions. For example, females can be perceived as being more emotional than males (Plant, Hyde, Keltner, & Devine, 2000;Shields, 2013). Such beliefs and expectations might create more space for older females' emotional expressions in communication.
Beliefs and stereotypes can influence our expectations of others and of ourselves. For example, males are perceived to exert greater emotional control than females, but to be less emotionally understanding (Shields, 2013). These expectations could affect both emotional expressions and the responses that they receive, as interpretations of emotional expressions can be affected by gender stereotypes (Plant et al., 2000). Beliefs and gender-emotional stereotypes, therefore, cannot be neglected when exploring emotional communication in home care settings. These differences need to be acknowledged in emotional communication to develop an awareness of and provide sufficient emotional support for older females and males. Therefore, it is important to help older people talk about their emotions and provide emotional support. Otherwise, unattended emotions and inconsistent comforting might affect older people's experiences of health and well-being.
---
Limitations
As a possible limitation of this study, the presented characteristics do not fully explain the GLMM models. Additional characteristics that might affect emotional communication not covered in the study, such as how long the participants have known one another, working life experiences, and older people's social status and care needs, have yet to be explored. Further research is needed to help ascertain whether the differences observed are due to differences in lived distress levels, stereotypes, or perhaps something entirely different. It should also be noted that the VR-CoDES only focuses on negative emotions. Therefore, positive emotions were not investigated in this study. The audio-recordings used were drawn from a specific Swedish care context and region. Restrictions of the study's generalizability to other contexts or countries could serve as a limitation. However, the data used cover a large and varied sample of audio-recorded communication, revealing the presence and expressions of older people's emotional needs and nurses' responses to these needs. The emotional needs of older people who are receiving care are not limited to a specific Swedish context, which could indicate the study's generalizability to other contexts and countries.
---
Conclusion
The results of this study indicate that emotional communication in home care can be influenced by several factors that might be influenced further by the norms, cultural beliefs, and stereotypes held by the society in which they occur. When emotional communication is affected by stereotypes, there are risks of objectification and of a lack of person centeredness in communication. There are also risks of overlooking the emotional needs of older people and of inequality in the emotional support provided.
---
Practical implications
The results of this study could raise awareness of the influence of nurses' and older people's characteristics on emotional communication in home care settings. This entails both an awareness of one's own characteristics and of those of others, and of how they impact communication. The results can further be used in education settings to enhance both students' and nurses' knowledge of and attentiveness to the characteristics and stereotypes that influence emotional communication with older people: recognizing older people's unique needs and differences and making communication and care more person centered. These results can further help illuminate and identify the challenges and complexities of emotional communication and its impacts on home care and health outcomes.
---
Tables
Table 1 Sample description of emotional communication during the home care visits
Table 2 Sample description of participants of the home care visits
---
CONFLICT OF INTEREST
The authors declare no potential conflict of interests.
---
AUTHOR CONTRIBUTIONS
Study design: J.H., A.J.S., P.S., I.K.H., H.E., S.D., and J.H.E. Data collection: J.H., A.J.S., J.H.E., and P.S. Data analysis: J.H., A.J.S., J.H.E., and P.S. Manuscript writing and revisions for important intellectual content: J.H., A.J.S., P.S., I.K.H., H.E., S.D., and J.H.E.

ORCID: Jessica Höglander https://orcid.org/0000-0002-5685-8669; Annelie J. Sundler https://orcid.org/0000-0002-9194-3244; Inger K. Holmström https://orcid.org/0000-0002-4302-5529

Höglander, J., Sundler, A.J., Spreeuwenberg, P., Holmström, I.K., Eide, H., Dulmen, S. van, Eklund, J.H. Emotional communication with older people: A cross-sectional study of home care. Nursing & Health Sciences: 2019, 21(3), p. 382-389. This is a Nivel certified Post Print; more info at nivel.nl.
This study addresses the question of how learners whose parents have a migration background can be supported in upper secondary education to prevent their dropping out of education. To that end, we conducted interventions in an upper secondary education setting aimed at improving school grades, subject-specific self-conceptions of ability in mathematics and German, motivation to study, and perceived self-efficacy, and we evaluated the effects on learner achievement. We applied a two-phase process: a more virtual approach during the restrictions imposed during COVID-19 and a more face-to-face approach in which learners were tutored by teachers. The intervention showed an improvement in grades in German and in the self-conception of ability in mathematics. However, this was only established during the face-to-face intervention phase. During the COVID-19 phase, when there was no possibility of standardized intervention, no specific effects were observed.

---
Introduction
At the upper secondary level, unevenly distributed participation in education is related to more than language barriers. It is co-determined by the sociostructural characteristics of origin (Düggeli et al., 2015; Maaz et al., 2008; Scharf et al., 2020; SKBF, 2018; Verhoeven, 2011). These characteristics can create educational obstacles for learners, as has been particularly well-documented in crossover research (see, for example, Becker et al., 2013). Even if young adults succeed in entering higher qualifying training, the problems have often not been overcome. In most cases, the challenge then becomes not dropping out of the training soon after successfully starting it. The stress of the training situation for these learners is often relieved only once they obtain a degree certificate that opens access to working life and thus creates a good starting point for their further professional biographical development (Hupka-Brunner & Meyer, 2021; OECD, 2022a).
---
Problem setting and questions
In order for national education systems to be informed about the extent to which younger generations can integrate into society by acquiring occupational certifications, many countries report figures on participation and graduation rates at this level at regular intervals. For example, according to the OECD (2022b), the average participation rate in Europe in 2019 was 84% (15- to 19-year-olds). The rate is 80% in Germany and around 88% in Switzerland. The number of graduations in the same age group is slightly lower: the OECD average is about 80%, in Germany it is about 73%, and in Switzerland about 84%. These figures may vary depending on the age group studied. For example, in Switzerland, some apprentices have not yet completed their education at 19; when considering all learners who have completed upper secondary education by the age of 25, the rate in Switzerland is about 90% (FSO, 2021). Thus, at least in Switzerland, a large number of young adults seem to be able to achieve a degree in upper secondary education by the age of 25. When gender and family characteristics are included in the analyses, a heterogeneous picture emerges, especially with regard to higher-level qualifications (ISCED 35) (UNESCO Institute for Statistics & Eurostat, 2012). For men, the overall graduation rate for higher-level qualifications is 34%; for women, the rate is around 44% (Gaillard & Babel, 2018). It is also apparent that learners born in Switzerland to Swiss parents achieve a 20% higher graduation rate from higher education courses than learners whose parents are not Swiss, regardless of whether the latter were born in Switzerland (Gaillard & Babel, 2018). Such inequality distributions are not new, and they have long been discussed in relation to questions of justice theory (Blossfeld, 2013; Dumont & Ready, 2020; Heinrich, 2010). The basic premise of modern education systems with a Western influence has been critically questioned.
It has been explicitly stated that no one in Switzerland should be hindered or excluded from participation in education on the basis of their characteristics of origin. Violations of this assured right to participate centrally affect the educational biographical development of adolescents. However, such failures also affect the economic system, which relies on young people who are as well educated as possible. Both the individual and the socio-economic dimensions concern fundamental issues of common and fair participation in civil society, which, as social pillars, hold collective life together (Putnam, 2015; Sassen, 2014). If damage can be identified in these areas, that is, if inequalities are found that can be identified as injustices, compensatory measures are necessary for those who have been disadvantaged (Becker & Schoch, 2018; Esser & Seuring, 2020). Moreover, these measures must be maintained until the causes of these injustices are eliminated. This paper is a first step in this regard. The focus is on the conceptualization and implementation of corrective, regulatory support for learners whose training risks are increased. This support must be effective during transitions and throughout the entire training period at the relevant levels of education.
This study concerns support throughout the training period. The focus is on young people with a parental migration background who are pursuing higher-qualifying education at the upper secondary level. If their grades at this level of education are not sufficient, or if their self-conception of their abilities is unstable and their perceived self-efficacy at school and motivation to work are weak, the probability that they will be able to complete their education decreases. To counter this situation, an intervention was carried out in a higher-qualifying training course in Switzerland. The aim was to support committed and motivated learners so that they could complete the training. This is the starting point of the study's central question: To what extent does support-oriented intervention succeed in positively influencing the development of learners' grades, self-conception of subject-specific skills, general perceived self-efficacy at school, and motivation to work?
---
The intervention
The intervention proposed to achieve the objectives was open to learners with a parental migration background. The learners had to be committed and willing to attend an additional weekly learning session. The intervention was located at a business school in Switzerland. This is a higher-qualifying full-time upper secondary level vocational school that prepares learners for a qualified vocational qualification. At the same time, this training gives them the option of attaining the Federal Vocational Baccalaureate. The intervention is presented below in terms of its structural framework; its embedded contents are then discussed. Finally, the characteristics to which the intervention was directed are reported. On the basis of these steps, an attempt is made to represent the intervention's effect.
---
Structural Phase I
The first phase of the intervention lasted from January 2020 to December 2020, a period understandably referred to as the COVID-19 setting. The COVID-19 setting was characterized by distance teaching and distance learning. Formally, meetings during this phase can be described as ad-hoc online meetings. We refrained from imposing an obligation to participate. Nevertheless, attempts were made to meet with the young people regularly electronically, largely through individual exchanges (see Figure 1).
---
Structural Phase II
During the second phase, which lasted from the beginning of 2021 until the end of June 2021, the structural teaching conditions normalized; this phase can thus be referred to as a "normal setting" in terms of the intervention. It was possible to work with the learners as planned, weekly and face-to-face. Participants were expected to participate regularly in person (see Figure 1).
---
Content Phase I (January 2020 to December 2020)
During the first phase, an attempt was made to actively approach the learners in the intervention group and to respond to their difficulties and questions arising from the situation. Looking back, the focus was not only on school learning problems but, in some cases, also on questions about the challenges of shaping life in general. Increasingly, these questions could be traced back to the learners' home situations. Attempts were also made to advance the adolescents in their learning, differentiated according to questions about how they organized their learning. Particular care was taken to make the progress they had made clearly visible and to attribute the causes to their own abilities wherever possible. In general, the content of the first phase was not very systematic. However, this phase is discussed here because it allows at least a sense of how the features discussed here changed under the condition of highly dynamic school realities (see Figure 1).
---
Content Phase II (January 2021 to June 2021)
At the beginning of 2021, the teaching situation returned to a more structured level. At that time, it became possible to implement the systematically organized intervention units as planned. It was possible to work with the intervention participants, as planned, for three hours a week. Teachers in German, mathematics, English, economics, and law were available to them. Thus, those subjects that carry particular weight in the training plan were prioritized. Work was focused on problems that students brought with them. Basically, the intervention was designed as a learning setting in which the learners largely self-directedly advanced their tasks on the basis of unresolved questions and upcoming content-related problems or deficiencies. This usually included the areas of task aids and upcoming tests. Stabilizing the self-assessment of various abilities was also an important part of the intervention. This was attempted by identifying strengths during individual support that were then made visible by the supporting teachers as learning successes. Work organization issues were also addressed, and learners were supported in this regard. The learners were thus given individually supervised learning time as well as the opportunity to design the time available to them together. This also made aspects of social-emotional learning visible (see also Dueggeli et al., 2021) (see Figure 1).
---
Dimensions of impact
Based on the structural and content-related frameworks and against the background of the intervention objectives, three impact dimensions were examined: first, the grades in the subjects of German and mathematics; second, the learners' subject-specific self-concept of their own ability in these subjects; and, third, two motivational aspects of learning: general perceived self-efficacy at school and the motivation to work. The grades were the performative criterion at the center of the intervention, as they centrally decide whether a learner can stay in training. Cognitive and motivational processes are linked to grades (see OECD, 2016) and were therefore promoted as target dimensions and recorded as dimensions of impact.
---
Hypotheses, design, instruments, and sample
The initial question is differentiated in relation to the two intervention phases into the following hypotheses:
• Hypothesis, Intervention Phase I: The grades in the subjects of German and mathematics, as well as the associated subject-differentiated self-conception of abilities, the perceived self-efficacy at school, and the motivation to work change to the same extent in the intervention group as in the reference groups.¹
• Hypothesis, Intervention Phase II: The grades in German and mathematics in the intervention group increase more strongly than in the reference groups. The self-conception of ability in mathematics and German increases more in the intervention group than in either reference group. Perceived self-efficacy at school and motivation to work also increase in the intervention group as compared to the two reference groups.
1 Note: No substantive work was possible (due to COVID) during this phase of intervention. The corresponding variables could not be systematically worked on, so no effects are expected. Specifically, H0 cannot be rejected.
---
Design
As the reported basic model shows, the first phase of intervention work started at the end of February 2020. It was preceded by the T0 measurement in January 2020. The time span between the T0 measurement and the start of the intervention was used to identify those young people for whom the intervention was designed. Young people could be admitted to the program only if their parents had a migration status. By default, the intervention was designed as a weekly work unit of 3 hours (see the section Intervention). In addition to process-accompanying qualitative evaluation formats, which were carried out monthly with the learners and once per term with the teachers, quantitative impact measurements took place every six months in a quasi-experimental intervention-control-group design with four repeated measurements. The first three measurement dates have been fully evaluated and are included in the present study (T0, January 2020; T1, January 2021; T2, June 2021).
---
Sample
The intervention was carried out with students from the 2019-2022 cohort at an upper secondary-level business school. This school-based organized vocational training leads either to a certificate of professional competence or to a vocational qualification. It thus enables a higher-qualifying grade at the upper secondary level (ISCED 35). Learners were offered the intervention based on possible parental migration status. Regular participation and personal commitment were required. This condition was met by 14 young people, who formed the intervention group. Two reference groups were also formed. The first (reference group I) comprised 26 young people. They could have joined the intervention group because of their parental migration status, but they decided not to participate. The second reference group (reference group II) consisted of 13 learners. Their parents had no migration status. They were therefore not eligible for intervention (see Table 1). The average age was comparable in all groups: 17.64 years (intervention group), 17.12 years (reference group I) and 17.31 years (reference group II). In the intervention group, 7 participants (50%) were male and 7 (50%) were female. In reference group I, 16 learners (61.5%) were male and 10 (38.5%) were female. In reference group II, 9 learners were male (69.2%) and 4 (30.8%) were female. After selecting the learners, it was recorded whether the learners spoke German at home. In the intervention group, 4 adolescents (28.6%) spoke German and 10 adolescents (71.4%) did not. In reference group I, 12 young people (46.2%) spoke German at home and 14 (53.8%) did not. In reference group II, whose parents had no migration status and who were therefore not eligible for the intervention, 10 learners (76.9%) spoke German at home and 3 learners (23.1%) stated that they did not speak German at home. 
The proportion of learners who speak German at home therefore increases from the intervention group through reference group I to reference group II (see Table 1).
---
Instruments
The grades of the students in the subjects of German and mathematics were recorded. The subject-specific self-conceptions of ability in mathematics and German, as well as motivation to work and general perceived self-efficacy at school, were also gathered (see Table 2).
---
Evaluation methodology
To test the hypotheses, a Kruskal-Wallis test for independent samples and corresponding post-hoc comparisons with the change values (T0-T1 and T1-T2) were calculated. A non-parametric approach was chosen because the change values of some variables could not be assumed to come from normally distributed data (grades in mathematics/German; self-conceptions in mathematics/German), group variances were inhomogeneous (grades in mathematics and self-efficacy), and no interval scaling of the values could be assumed (grades in mathematics and German). In addition, given the risk of distortion due to outliers in the change values and the rather small group sizes, the procedure with the fewer assumptions was chosen.
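As a rough illustration of this evaluation strategy, the following sketch runs a Kruskal-Wallis test on change scores for three groups, followed by pairwise post-hoc comparisons. This is not the authors' actual analysis: the change scores are invented, and the choice of Mann-Whitney U tests with Bonferroni correction for the post-hoc step is an assumption, since the article does not specify the post-hoc procedure.

```python
# Hypothetical sketch of the evaluation approach: Kruskal-Wallis omnibus test
# on change scores (e.g., T1 - T0) across three groups, then post-hoc pairwise
# comparisons. All group data below are invented for demonstration only.
from scipy.stats import kruskal, mannwhitneyu

# Invented change scores in German grades for the three groups
intervention = [0.5, 0.3, 0.7, 0.2, 0.4, 0.6, 0.1]
reference_i = [0.0, -0.2, 0.1, 0.3, -0.1, 0.2]
reference_ii = [-0.4, -0.3, -0.5, -0.1, -0.2]

# Omnibus test: do the three groups differ in their change scores?
h_stat, p_value = kruskal(intervention, reference_i, reference_ii)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparisons (Mann-Whitney U, Bonferroni-corrected alpha)
pairs = {
    "intervention vs reference I": (intervention, reference_i),
    "intervention vs reference II": (intervention, reference_ii),
    "reference I vs reference II": (reference_i, reference_ii),
}
alpha = 0.05 / len(pairs)  # Bonferroni-adjusted significance threshold
for label, (a, b) in pairs.items():
    u, p = mannwhitneyu(a, b, alternative="two-sided")
    print(f"{label}: U = {u:.1f}, p = {p:.4f} (alpha = {alpha:.4f})")
```

A non-parametric test such as this only assumes ordinal data, which matches the article's stated reservations about normality, variance homogeneity, and interval scaling of the grades.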
---
Results
---
Intervention Phase 1 (T0-T1)
During the first intervention phase, the so-called COVID-19 phase, no significant differences between the groups were found in the values of the analyzed characteristics. However, the German and mathematics grades tended to show a slight decline in all three groups. The change in the characteristics of perceived self-efficacy or motivation to work was similar.
Here, too, a descriptive decrease can be observed in all three groups. With regard to the subject-specific self-conceptions of ability in mathematics and German, it can be stated, again descriptively, that in the area of German, there was a slight increase in all three groups. In the area of mathematics, it was reduced in the intervention group, while the values in the two reference groups increased. However, these mean value differences cannot be statistically assured as group differences.
Looking at the intervention group specifically in comparison with the two reference groups, the following trends are shown, again descriptively: For grades, the values in the intervention group were reduced to a greater extent than in the two reference groups. In the self-conception of ability in German, the increase for the intervention group was greater than in the reference groups; in the self-conception of ability in mathematics, there was a slight decrease in the intervention group and a slight increase in the reference groups. In terms of motivation to work, the decrease in the value for the intervention group was somewhat less than in the two reference groups (see Table 3).
---
Intervention Phase 2 (T1-T2)
During the second intervention phase, there was a statistically significant change in the German grades and in the subject-specific self-conception of ability in mathematics (see Table 4). Subsequent pairwise group comparisons show that both effects were due to the differences between the intervention group and reference group II (the group without intervention and without parental migration). This means, first, that a positive change in the German grades of young people from families with a migration background stood in contrast to a decrease in the German grades in the group without intervention and without parental migration (see Table 4). Second, with regard to the subject-specific self-conception of ability in mathematics, the decreasing value for reference group II was contrasted with an increasing value among young people with a parental migration background (intervention group) (see Table 4). All other characteristics showed statistically non-significant trends of change. The mathematics grades tended to decrease in the reference groups and increase in the intervention group. The self-conception of ability in German increased somewhat in the intervention group and in reference group II, while it decreased slightly in reference group I. Perceived self-efficacy tended to increase in all three groups. In terms of motivation to work, a decreasing trend can be seen in the intervention group and in reference group I. In reference group II, it rose somewhat during this phase (see Table 4).
---
Discussion
The positive development of the German grades during the second intervention phase was a central result of this study. Learners whose parents had a migration background and whose language at home was less frequently German made greater progress than learners without a migration background who more often spoke German at home. This finding points in an optimistic direction.
With the effect in German, the positive change affected an area that is highly significant for general school development. If German grades improve for learners who, due to migration, are at increased risk of not completing their training, the basis for other subjects taught in the local language of instruction will also be stabilized. The second central finding is the positive change in the subject-specific self-conception of ability in mathematics. This change concerned the same two groups: it was again the learners of the intervention group who changed positively compared to the change in reference group II.
The attempts to positively influence the development of grades and, in parallel, to stabilize young people's self-assessment of their subject-specific abilities seem to have had the desired effect here, at least to some extent. However, the analyses of this study's qualitative data will show exactly how the internal interrelationships are to be understood. This will somewhat stabilize the basis for further developing the structure and implementation of the intervention in a differentiated manner.
These two effects cannot hide the fact that the analyses leave central questions unanswered. For example, further thought should be given to how the effectiveness of the intervention could be broadened and thus extended to other characteristics. In addition, further analysis is needed to address the question of why the developments between the intervention group and reference group I are not more different. In concrete terms, this means trying to discuss the extent to which the proportion of young people who speak German at home may play a role here. This proportion was higher in reference group I than in the intervention group. In general, this could mean that the intervention had an effect primarily on the young people with a parental migration background who did not speak German at home.
The fact that the intervention produced stronger effects during the second phase could indicate that supportive measures at the upper secondary level should be coupled with an obligation to participate regularly in face-to-face formats. If participatory and self-regulated forms of learning are to be sought, which must also be the responsibility of the learners themselves, a formal obligation to participate regularly seems to be a prerequisite for learning and training success. Without structuring framework requirements, learners have to create formal learning structures themselves. This is undoubtedly important. However, it takes away the time and attention they need for learning specific subject matter. We saw this clearly during the first phase of the intervention, which was not very systematically structured. It was necessary to clarify questions about the structuring of the day in general with the young people before addressing the subject matter. Moreover, in light of the developments during the first phase of the project, this topic may need to be considered in general at the upper secondary level. In educational terms, the findings indicate that young people are empowered in their responsibility to regulate and shape their own learning in more open learning formats, which can include distance formats. In this context, the development of the self-conception of ability seems to be of particular importance.
However, all the findings reported here must not give the impression that this offer creates educational justice. As implemented, the study's main concern was to ensure that the negative effects of educational inequality do not become even more pronounced. However, the basic lever for mitigating this inequality cannot be exclusively compensatory individual support; it must simultaneously start at the level of educational structures. The course must be set here so that structural risk factors for educational inequality can also be eliminated at the upper secondary level and beyond. That is not easy. And if it means taking specific counter-measures, especially with programs such as this one, then that is what must be done. It is necessary to structurally anchor new insights, such as those emerging from the study presented here, in compulsory compensation channels. Perhaps this is not particularly fair, as some learners have to devote more time and commitment to their upper secondary education because of their characteristics of origin than others without these risk factors. However, protecting individuals from being released into the labor market without a degree seems to be a primary objective, and one that does not prevent them from undertaking their professional development with as much freedom as possible. This is a matter of justice in professional biographical life design, one that should be developed further situationally and prospectively, as well as structurally.
hosted a symposium in October 2010 focused on sex work and sexually transmitted infections in Asia, engaging a biosocial approach to promote sexual health in this region. Asia has an estimated 151 million cases of curable sexually transmitted infections (STIs; eg, syphilis, gonorrhea, chlamydia) each year, with commercial sex interactions playing a large role in ongoing transmission. Substantial human movement and migration, gender inequalities, and incipient medical and legal systems in many states stymie effective STI control in Asia. The articles in this supplement provide theoretical and empirical pathways to improving the sexual health of those who sell and purchase commercial sex in Asia. The unintended health consequences of various forms of regulating commercial sex are also reviewed, emphasizing the need to carefully consider the medical and public health consequences of new and existing policies and laws. |
Unsafe sex is the second most important risk factor for morbidity and mortality in low-income areas [1]. The World Health Organization estimates that there are more than 150 million new curable sexually transmitted infections in South and Southeast Asia each year [2]. But traditional medical and public health approaches to sustainably changing sexual behavior have been fraught with failure. Moreover, the process of framing these public health issues has often been charged with assumptions about sex work, distancing sex workers from important resources and complicating effective research programs. Although there is international consensus about the importance of sex worker human immunodeficiency virus (HIV) and sexually transmitted infection (STI) medical programs, there are no best practices for social responses (legal, political, economic) on behalf of vulnerable sex workers, and not all sex workers are vulnerable to HIV infection. The social response of each Asian nation to its sex industry intimately depends on culture, normative structures, and legal boundaries. Implicit in these notions of sex work are value judgments and moral assessments that extend well beyond the traditional framework of biomedicine charged with organizing STI/HIV control measures. A broad spectrum of social responses to sex work has emerged in Asia: some nations arrest or detain prostitutes in top-down mobilized state responses, connecting commercial sex to human trafficking and other transnational criminal activity. Other Asian nations foster grassroots NGO efforts to empower sex workers in client negotiations and reproductive health choices, using empowerment approaches. The consequences of sex worker regulation for the spread of HIV/STIs are unclear, but empirical data from our multidisciplinary working group help to inform these critical policy positions.
A better understanding of the social context shaping commercial sex policy in Asian states can facilitate implementation and roll-out of HIV/STI social policy and public health programs in Asia and beyond.
This interdisciplinary workshop brought together specialists across various fields to examine the commercial sex enterprise in Asia. Emphasis was placed on a biosocial framework to integrate historical, social, political, legal, economic, biological, and public health perspectives. This is not a matter of simple translation across disciplines, but rather a serious consideration of the contribution of many different factors, not as separate influences but as mutually constitutive and inherently intertwined parts of a complex whole. In order to illuminate new aspects of sex work and the relationship to STIs, this conference addressed the following major conundrums: How can traditional dichotomous theoretical frameworks of sex work that rely on simplification of sex workers to either fully autonomous empowered individuals or nonautonomous victims be reframed? How have nongovernmental organizations (NGOs) and the spread of civil society in many parts of Asia changed the potential for sex workers to organize HIV/STI prevention? How does sex worker agency, measured individually or collectively, influence sexual risk taking? What are the implications of transnational sex trafficking and sex work for inter-Asian state relationships and collaborative medical and public health responses?
The collection begins with a piece by Dr Joseph D. Tucker at the Harvard School of Medicine and Dean Astrid Tuminez at the National University of Singapore [3]. Their article highlights the importance of choosing an appropriate conceptual framework when analyzing sexual health. Much of the research on sex work in Asia has focused on using an empowerment approach, although there have been increasing efforts to use an abolitionist framework to understand commercial sex. A new behavioral-structural conceptual framework is described, with implications for clinicians, policymakers, and public health practitioners. They point out that using such a behavioral-structural framework can incorporate some elements of both prevailing conceptual frameworks, advancing our knowledge of how sex work becomes unsafe and what structural factors can be changed to attenuate sexual risk among sex workers in Asia and beyond.
One example of how the social environment of sex workers increases sexual risk comes from India, where violence and meso-level environmental factors are critically linked. Annie George, Shagun Sabarwal, and P. Martin from the International Center for Research on Women (Hyderabad, India) investigate the effect of the organization of sex work on exposure to physical and sexual violence [4]. They report an alarmingly high prevalence of violence among all female sex workers, with a 3-fold increased risk of physical violence and a 2-fold increased risk of sexual violence among women engaged in contract work. This group's research reveals the connection between sex workers' terms of work and risk of violence, with important policy implications.
In their article, Jennifer T. Erausquin, Elizabeth Reed, and Kim Blankenship of Duke and American Universities investigate the relationship between self-reported sexual risk and interactions with police among 850 Indian female sex workers [5]. Although there has been much speculation regarding the effect of punitive police measures on sexual risk, this article provides an empirical analysis of this relationship in the context of Avahan, a Bill and Melinda Gates Foundation-supported AIDS initiative. They find that a number of dimensions of police maltreatment of sex workers are associated with increased sexual risk. Their findings highlight the importance of including police and others who implement local policy in the process of designing and sustaining effective sexual health programs.
Continuing with the theme of violence among female sex workers, Jay Silverman and colleagues examine coercion among HIV-infected female sex workers in Mumbai, India [6]. Of their sample of 211 women, 41.7% were trafficked into sex work. Coercion into sex work is associated with increased exposure to violence, poor condom use, higher number of clients per day, and greater alcohol use. Coercion and agency play a key role in mediating sexual risk, with forced sex work playing a role in expanding STI/HIV transmission. Better understanding of the terms and context of sex work can help to promote sexual health interventions.
In their article, Suiming Pan and Yingying Huang of the Institute of Sexuality and Gender at the People's University (Beijing, China) and William Parish of the University of Chicago examine the clients of female sex workers in China [7]. Drawing on a 2006 population-based representative sample of adult men, they find that 5.6% of urban men reported visiting a female sex worker in the past year. This percentage was similar to that found in 2000, suggesting that China's punitive anti-prostitution campaigns have not substantially transformed the STI epidemics. The bulk of self-reported STIs were associated with unprotected commercial sex, consistent with earlier population-representative studies. Interestingly, they did not find that young, migrant men have an increased risk of unsafe sexual behaviors. Instead, higher-income businessmen report having more unprotected commercial sex. This analysis serves as a useful reminder of the importance of male determinants in promoting STI spread in the Chinese context.
In her article, Joan Kaufman of Harvard and Brandeis Universities examines the influence of civil society on improving the effectiveness of STI prevention among sex workers [8]. Criminalization of sex work in several Asian contexts pushes marginalized female sex workers farther away from the government outreach workers officially charged with STI/HIV prevention. In many Asian states, a weak civil society and poorly coordinated NGOs stall comprehensive sexual health programs. She argues that a labor rights-based approach to community-based STI/HIV prevention is the most likely to succeed, highlighting the recent success of Sonagachi in India and other peer-organized NGOs that represent sex workers' interests and needs. Greater cooperation between government and NGOs and more NGO-led responses are key parts of effective STI/HIV responses.
The goal of this supplement is to provide an evidence base to further our understanding of how sex worker and client health can be promoted in Asia. There are many challenges in designing sexual health programs focused on sustainably decreasing unsafe commercial sex. High rates of migration within and across borders, limited sex worker NGOs, and lack of agency among sex workers create structural barriers for STI/HIV prevention programs. While many innovative programs for sexual health have been implemented in Asia, few have been comprehensively evaluated. Further interdisciplinary research is needed to understand the context and outcomes of such programs. A broader evidence base could help inform programmatic and policy efforts focused on STI/HIV control in this critical region.
---
Notes
|
Background: Female sex workers (FSWs) are one of the most-at-risk population groups for human immunodeficiency virus (HIV) infection. This paper aims at identifying the main predictors of HIV infection among FSW recruited in the 2nd Biological and Behavioral Surveillance Survey in 12 Brazilian cities in 2016. Method: Data were collected on 4245 FSW recruited by respondent-driven sampling (RDS). Weights were inversely proportional to participants' network sizes. To establish the correlates of HIV infection, we used logistic regression models taking into account the dependence of observations resultant from the recruitment chains. The analysis included socio-demographic and sex work characteristics, sexual behavior, history of violence, alcohol and drug use, utilization of health services, and occurrence of other sexually transmitted infections (STIs). Results: HIV prevalence was estimated as 5.3% (4.4%-6.2%). The odds ratio (OR) of an HIV-positive recruiter choosing an HIV-positive participant was 3.9 times higher than that of an HIV-negative recruiter (P < .001). Regarding socio-demographic and sex work characteristics, low educational level, street as the main work venue, low price per sexual encounter, and longer exposure time as a sex worker were found to be associated with HIV infection, even after controlling for the homophily effect. The OR of being HIV infected among FSW who had been exposed to sexual violence at least once in a lifetime (OR = 1.5, P = .028) and the use of illicit drugs at least once a week were highly significant as well, particularly for frequent crack use (OR = 3.6, P < .001). Among the sexual behavior indicators, not using condoms in some circumstances was significantly associated with HIV infection (OR = 1.8, P = .016). 
Regarding the occurrence of other STI, the odds of being HIV infected was significantly higher among FSW with a reactive treponemal test for syphilis (OR = 4.6, P < .001). The main factors associated with HIV infection identified in our study characterize a specific type of street-based sex work in Brazil and provide valuable information for developing interventions. However, there is a further need to address social and contextual factors, including illicit drug use, violence, exploitation, as well as stigma and discrimination, which can influence sexual behavior. | Introduction
Since the beginning of the acquired immune deficiency syndrome (AIDS) epidemic, female sex workers (FSWs) have been nationally and internationally recognized as a population at high risk for acquiring human immunodeficiency virus (HIV) infection. [1][2][3][4][5][6] Worldwide studies point to the high levels of HIV prevalence among FSWs compared to the general population. In African countries, there is emerging data showing that FSW carry a disproportionate burden of HIV even in generalized epidemics. [7,8] Studies in Asia [9][10][11] as well as in developed countries also report higher rates among FSW. [12,13] A systematic review of HIV prevalence studies among key populations in Latin America and the Caribbean estimated a median HIV prevalence among FSW of 2.6%, [14] while the estimated prevalence in the adult population was 0.5%. [15] In Brazil, a study carried out in 2000 to 2001 in some capital cities estimated a prevalence of 6.1% among 2712 FSW, [2] a rate of about 15 times higher when compared with that of the Brazilian female population aged 15 to 49 years. [16] FSWs are considered a high-risk group for acquiring HIV infection [3,17] due to their social vulnerability and factors associated with their work such as multiple sex partners, inconsistent condom use, or coinfection with other sexually transmitted infections (STIs). [18] Studies show that HIV infection is associated with socio-demographic and commercial sex work characteristics, [19][20][21][22][23] such as age and schooling, time span of sex work, place of work, price of commercial sex, and use of drugs, [24][25][26] which, in turn, is associated with unprotected sex. [27] Findings from a snowball survey carried out in Santos, São Paulo, showed that the use of illicit drugs, especially crack, was one of the main factors associated with HIV infection. [4] Furthermore, structural issues such as stigma and discrimination act as important barriers and hinder access to and use of health services. 
[28,29] The burden of HIV, syphilis, and other STIs has urged researchers to conduct studies in Brazil among FSW. [30][31][32] Furthermore, in Brazil's concentrated HIV epidemic, [33] even small interventions in this vulnerable group can significantly decrease HIV incidence in the general population. [34] Thus, monitoring factors associated with HIV infection is important not only to support interventions focused on this population group, but also to reduce the spread of HIV infection among clients of FSW, who constitute a bridge population for STI/HIV transmission into the Brazilian population. [35] In general, Brazilian studies conducted among FSW until the mid-2000s used convenience samples, making it difficult to estimate parameters for monitoring the HIV/AIDS epidemic in this population group at the national level. [36] In 2009, an HIV biological and behavioral surveillance survey (BBSS) carried out in 10 Brazilian cities was the first study to use a probabilistic sampling method, respondent-driven sampling (RDS), for the recruitment of FSW. [37] For the analysis of data collected by RDS, a statistical method has been proposed for the estimation of HIV prevalence and its variance, taking into account the dependence of observations resultant from the recruitment pattern. [38] This approach was extended to other statistical analyses, such as measures of association and multivariate models. [39] In 2016, a 2nd HIV BBSS among FSW was carried out in 12 Brazilian cities, aiming at monitoring STI and risky practices among FSW. Based on improvements in data analysis techniques, [39] the aim of this study was to identify factors associated with HIV infection using logistic regression models.
---
Methods
This study is part of the 2nd BBSS, a cross-sectional RDS survey among 4328 FSW collected in 12 Brazilian cities from July to November 2016. The BBSS was designed to estimate the prevalence of HIV, syphilis, and hepatitis B and C and to evaluate knowledge, attitudes, and practices related to HIV infection and other STIs among FSW. The research project was approved by the Ethics Committee of the Oswaldo Cruz Foundation (Protocol 1.338.989).
Twelve Brazilian cities were a priori chosen by the Department of STI/AIDS and Viral Hepatitis, Ministry of Health, according to both geographical criteria and their epidemiologic relevance in the HIV/AIDS epidemic in the country. The sample size was set at 350 FSW in each city. Figure 1 shows the 12 cities considered in the study and their corresponding sample sizes.
Women were eligible to participate in the study if they met the following inclusion criteria: age 18 years old or over; to report working as a sex worker in one of the cities of the study; to have had at least one sexual intercourse in exchange for money in the past four months; and to present a valid coupon to participate.
Fieldwork was conducted in health services located in the 12 cities. For each city, 6 to 8 initial participants, herein referred to as "seeds," were chosen purposively, following previous formative research. Seeds were well-connected FSW in their community who reported large social networks. To provide diversity of recruited FSW, seeds were chosen with different characteristics (age group, color/race, socioeconomic class, education, and work venue). Each seed received 3 coupons to distribute to other sex workers from her social network. Recruits of the seeds in the survey were considered the first wave of the study. After participating in the interview, each participant received 3 additional coupons to distribute to their peers and this process was repeated until the sample size was achieved in each city.
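The seed-and-coupon recruitment process described above can be sketched as a toy wave-by-wave simulation. This is purely illustrative: the function, the network structure, and all parameter values are invented for the demonstration and are not part of the study's methodology.

```python
import random

def simulate_rds(seeds, network, coupons=3, target=20, rng_seed=1):
    """Toy simulation of RDS recruitment waves: each enrolled participant
    hands out up to `coupons` coupons to not-yet-enrolled peers, and the
    process repeats until the target sample size is reached."""
    rng = random.Random(rng_seed)
    enrolled = list(seeds)        # seeds form wave 0
    frontier = list(seeds)        # participants still holding coupons
    while frontier and len(enrolled) < target:
        next_wave = []
        for person in frontier:
            peers = [p for p in network.get(person, []) if p not in enrolled]
            for peer in rng.sample(peers, min(coupons, len(peers))):
                if len(enrolled) >= target:
                    break
                enrolled.append(peer)
                next_wave.append(peer)
        frontier = next_wave
    return enrolled

# Hypothetical social network: node 0 is a seed, values are peers.
network = {0: [1, 2, 3], 1: [4, 5, 6], 2: [7, 8, 9]}
sample = simulate_rds([0], network, target=6)
```

The key property mirrored here is that recruitment spreads through existing social ties, which is why later analyses must account for the dependence between a recruiter and her recruits.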
The RDS method also draws on the strategy of giving incentives to the participants. A 1st incentive, that is, primary incentive, is given to participants when they complete their participation in the study. Thereafter, a 2nd incentive, that is, secondary incentive, is given to participants for each peer successfully recruited into the study. In this study, the primary incentive was a gift (makeup products), payment for lunch, and transportation in addition to a reimbursement for their time lost from work (approximately US $15.00). The secondary incentive was a payment of US$10.00 for each recruited person who participated in the study. The choice of sites, in general a health service, for data collection and the level of incentives were established according to the formative research carried out in each city before the RDS survey.
The questionnaire included modules on: socio-demographic characteristics and information related to commercial sex activity, knowledge about HIV and other STI transmission, sexual behavior, history of HIV and syphilis testing, STI history, use of alcohol and illicit drugs, access to prevention activities, access to and utilization of health services, discrimination, and violence. The questionnaire was designed for tablets and could be self-administered according to the participant's desire and readiness.
Tests for HIV, syphilis, and hepatitis B and C were conducted by standard rapid tests using peripheral venous blood collection, according to protocols recommended by the Brazilian Ministry of Health. All tests occurred before the interview and all participants received pre-and posttest counseling. Participants who tested positive for any of the rapid tests had their blood samples taken for confirmatory laboratory testing and received additional posttest counseling, both for psychological impact and to encourage partner notification, and were referred to public health systems for follow-up.
Screening for HIV, hepatitis B virus (HBV), hepatitis C virus (HCV), and syphilis antibodies used the following assays: HIV (HIV Test Bioeasy, Standard Diagnostic Inc, Korea and ABON HIV 1/2/O Tri-Line Human Immunodeficiency Virus Rapid Test Device, China), HBV (Vikia HBsAg, BioMérieux SA, France), HCV (ALERE HCV, Standard Diagnostic Inc, Korea), and syphilis, treponemal assay (SD BIOLINE Syphilis 3.0, Standard Diagnostic Inc, Korea). A reactive result on the initial HIV rapid test was followed by a 2nd HIV rapid test from a different manufacturer, and samples reactive on rapid tests were further submitted to confirmatory assays.
---
Data analysis
The proposed weighting for data collected by RDS is proportional to the inverse of the network size of each participant. [40] In this study, the question used to measure the network size of each participant was: "How many sex workers who work here in this city do you know personally?" Each of the 12 cities composed a stratum and, within each stratum, weights were inversely proportional to network size and rescaled to sum to the stratum size.
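The weighting scheme can be sketched as follows. This is an illustrative implementation, not the authors' code; the function name and the example network sizes are invented.

```python
# Sketch of RDS weighting: each participant's weight is proportional
# to the inverse of her reported network size, rescaled so that the
# weights within one city (stratum) sum to that stratum's sample size.

def rds_weights(network_sizes):
    """Return one weight per participant, summing to len(network_sizes)."""
    n = len(network_sizes)
    inverse = [1.0 / d for d in network_sizes]
    total = sum(inverse)
    return [n * w / total for w in inverse]

# Participants with larger networks (easier to reach) get smaller weights.
weights = rds_weights([5, 10, 20, 40])
```

The design choice is that over-represented, well-connected participants are down-weighted, approximating a sample in which inclusion probability is proportional to degree.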
The tendency of a participant to recruit peers with similar characteristics is usually referred to as homophily. [41] To take into account this bias in the recruitment pattern and a potential overrepresentation of individuals with certain characteristics in the study population, we used logistic regression models to estimate factors associated with HIV infection according to a method proposed by Szwarcwald et al. [38] For each participant, the result of the recruiter's HIV test was taken into account to control for the homophily effect. Additionally, the logistic regression models took into account the complex sample design, considering each city as a stratum and the participants recruited by the same FSW as a cluster. [38] The following variables were included in the analysis: sociodemographic variables (age, educational level, and race/color); characteristics related to sex work (workplace, time as FSW, and price of each sexual encounter); prevention activities (affiliation with or participation in a non-governmental organization [NGO] for FSW rights [FSW-NGO], STI counseling); sexual behavior (not using condoms in some circumstances: knows the client, in much need of money, client's requirement, no condom available at the time of the sexual encounter, other; consistent condom use with clients in vaginal sex); history of physical and sexual violence; alcohol and drug use (unprotected sex due to alcohol or drug use at least once a week, crack or cocaine use at least once a week); utilization of health services (Pap smear and HIV testing in the 24 months before the survey); and STI (self-reported occurrence of lesions, blisters, or warts on the vagina or anus in the previous 12 months, and reactive treponemal antibody test for syphilis).
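A minimal sketch of the modelling idea follows: the recruiter's HIV status enters the regression as an extra covariate so that the odds ratio for the exposure of interest is adjusted for homophily. The toy data and the plain gradient-ascent fitter are invented for illustration; they omit the RDS weights, strata, and cluster adjustments the study actually used.

```python
import math

def fit_logistic(X, y, lr=0.5, iters=20000):
    """Unweighted logistic regression by gradient ascent (illustration only;
    the study additionally used survey weights, strata, and clustering)."""
    k = len(X[0])
    beta = [0.0] * (k + 1)            # intercept + one slope per covariate
    for _ in range(iters):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            grad[0] += yi - p
            for j, x in enumerate(xi):
                grad[j + 1] += (yi - p) * x
        beta = [b + lr * g / len(y) for b, g in zip(beta, grad)]
    return beta

# Toy rows: [exposure_of_interest, recruiter_hiv_positive]; the second
# column plays the role of the recruiter's test result in the paper.
X = [[1, 1], [1, 1], [1, 1], [1, 0], [1, 0],
     [0, 1], [0, 1], [0, 0], [0, 0], [0, 0]]
y = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]
beta = fit_logistic(X, y)
adjusted_or = math.exp(beta[1])   # OR for the exposure, homophily-adjusted
```

Exponentiating the exposure coefficient gives the odds ratio net of the recruiter's status, which is the homophily control described above.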
---
Results
HIV prevalence was estimated as 5.3% with a 95% confidence interval (CI) (4.4%-6.2%). The odds ratio (OR) of an HIV-positive recruiter choosing an HIV-positive participant was nearly 4 times higher than that of an HIV-negative recruiter (Table 1). Taking into account the homophily effect and the dependence between recruiters and their recruited participants, the design effect was estimated at 1.76. [38] In the logistic regression analyses presented in Table 1, many of the studied variables were significantly associated with HIV infection, even after controlling for the recruiter's HIV result. Regarding socio-demographic characteristics, the older the FSW, the higher the HIV prevalence, and the lower the educational level, the higher the odds of HIV infection. However, no statistically significant difference was estimated for skin color/race. Regarding commercial sex characteristics, HIV infection was associated with time in commercial sex work: HIV prevalence ranged from 1.9% for less than 5 years to 11.9% for 20 or more years of sex work (OR = 6.7, P < .001). Sex work venue was also significantly associated with HIV infection, with an OR of 3.4 (P < .001) when street spots were compared to other workplaces. Additionally, an inverse association was found for the price of each sexual encounter: the higher the price, the smaller the odds of HIV infection. In relation to participation in prevention activities, women who were affiliated with or participated in an FSW-NGO in the past 6 months had a 1.7 times greater chance of being HIV infected. STI counseling in the last 6 months prior to the survey was not statistically significant.
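For reference, odds ratios like those quoted here derive from 2×2 tables. A minimal sketch of an OR with a Woolf (log-normal) 95% confidence interval follows; the counts are hypothetical and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Woolf 95% CI from a 2x2 table:
    a, b = outcome+/outcome- among the exposed;
    c, d = outcome+/outcome- among the unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only:
or_, lo, hi = odds_ratio_ci(20, 80, 5, 95)
```

A P value below .05 corresponds to a 95% CI that excludes 1, which is how the significance statements in the tables can be read.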
Results related to history of violence, use of alcohol and illicit drugs, and sexual behavior are presented in Table 2. The odds of being HIV infected among FSW who had been exposed to sexual violence at least once in a lifetime was significantly higher (OR = 1.5, P = .028). As to alcohol use, only the indicator unprotected sex under the effect of alcohol or drug use at least once a week showed a statistically significant association with HIV infection (OR = 2.0, P = .010). On the other hand, the use of illicit drugs at least once a week was highly significant: HIV prevalence varied from 4.5% to 10.6%, OR = 2.5 (P < .001), with a marked effect for frequent crack use (OR = 3.6, P < .001). Although HIV prevalence was smaller for consistent condom use with clients, the OR was not statistically significant. However, not using a condom because none was available at the time of the sexual encounter was significantly associated with HIV infection (OR = 1.8, P = .016). Other circumstances for not using a condom, such as "many sexual encounters during the day," "allergy to condom," "unconsciousness due to use of alcohol or drugs," or "any other motive," showed borderline associations as well.
Among the indicators of health service utilization, neither uptake of the Pap smear exam nor HIV testing in the previous 24 months before the survey showed a significant effect on HIV infection, although prevalence estimates were smaller among FSW who used health services (Table 3). Regarding the occurrence of STI signs over the 12 months prior to the survey, presence of blisters on the vagina or anus indicated a chance 2.6 times higher of HIV infection when compared to those who did not report STI signs. The OR was highly significant among those FSW who had been exposed to syphilis (OR = 4.6, P < .001).
In Table 4, we present the results of the multivariate analysis. Educational level remained statistically significant, highlighting the stronger effect of illiteracy or very low level of education, as well as price per sexual encounter, time of exposure to sex work, and the workplace (street vs others) after controlling for all other variables that also showed significant effects on HIV infection. Among the indicators of alcohol and illicit drug use, only the use of crack showed an adjusted significant OR (OR = 1.8, P = .027). Syphilis (reactive treponemal test) was the most important predictor of HIV infection, with corresponding adjusted OR of 2.7 (P < .001).
---
Discussion
In the first BBSS among FSW recruited by RDS in 2009 in 10 Brazilian cities, [38] HIV prevalence was estimated as 4.8% (95% CI: 3.4%-6.1%), approximately 12 times higher than the estimated prevalence in the Brazilian female population. Seven years later, the findings of the present study showed no significant change in HIV prevalence, which remained at the same 5% level, with overlapping 95% CI (4.4%-6.2%). A large and significant homophily effect was found as well.
The recruitment of a large number of FSWs in 12 Brazilian cities, in a short time period, at a relatively low cost compared to studies conducted in high-income countries, and the use of appropriate statistical procedures in data analysis, indicate that RDS is a feasible methodology for the study of FSW in Brazil. The experience of the previous RDS study enabled us to improve the techniques for data analysis and all the logistic regression models used in the present study took into account the HIV infection homophily effect and the intraclass correlation between recruited FSW by the same participant. [39] To identify the main predictors of HIV infection, we constructed indicators based on different aspects that characterize the current HIV/AIDS epidemic in Brazil among FSW. In relation to socio-demographic and commercial sex characteristics, low educational level, street as the main work venue, low price per sexual encounter, and longer exposure time as a sex worker were found to be the main predictors of HIV infection.
Our results corroborate those of other international studies among FSW [42][43][44] and of the previous 2009 BBSS. [39] Older age, longer exposure to sex work, charging less for services, lower educational levels, and working mostly in the streets have all been shown to be associated with HIV infection. As to alcohol and illicit drug use, our findings reiterate the greater HIV vulnerability associated with unprotected sex. [45] The possibility of not using condoms in some specific situations, such as not having a condom available at the time of the sexual encounter, showed a significant effect on HIV infection as well. Data from previous surveys in Brazil evidenced a tendency for FSW to report consistent condom use with clients, especially when interviewed by health staff. However, when questions are asked indirectly, they reveal not using condoms in several circumstances. [4,9] Regarding participation in prevention activities, the results showed a higher chance of HIV infection among women who reported being affiliated with or participating in FSW-oriented NGOs. This finding suggests that HIV-infected women may have sought the support given by NGO activists because of their HIV infection. Unfortunately, in the current context of weakening NGOs in Brazil, these institutions have become less and less focused on prevention and health promotion than they historically were. [46] The findings on health services utilization indicated a smaller HIV prevalence among FSW tested for HIV over the past 2 years. Frequency of HIV testing reflects individual concern with preventive health care but also self-perception of risk. Despite the nonsignificant OR, the lower chance of being HIV infected among FSW who had tested for HIV over the past 2 years suggests improvements in HIV testing mainly due to prevention attitudes. 
[47,48] The occurrence of other STIs, indicated by the presence of blisters on the vagina or anus and by syphilis, comprised the most significant determinants of HIV infection. These findings reveal not only past exposure to unsafe practices related to STIs, [9] but may also reflect the enhancing effect of STIs on HIV transmission. [49] History of sexual violence was shown to be a relevant factor associated with HIV infection. Although prostitution in Brazil is not considered a crime under the National Constitution, FSW constantly experience human rights violations such as physical and sexual violence usually perpetrated by partners, family members, and clients. [50] According to the World Health Organization, [51] violence has a direct impact on the adoption of safe sex practices among FSW. Engagement in violent and unprotected sexual practices, even against their will, reflects the stigma and discrimination suffered by these women, factors that have been shown to be strongly associated with adverse health outcomes. [52,53] The results of the multivariate analysis showed that the association of some variables with HIV infection persisted, such as the effects of lower education, cheaper fees for services, working at street spots, longer exposure to sex work, syphilis, and crack use at least once a week. It is important to note that the use of multivariate models on data collected by RDS often yields variables that lose statistical significance, due to the complex sampling design with over-control of the homophily effect, or to adjustments for confounding. Other limitations are related to the cross-sectional design, for which the analysis of causality is restricted since temporality is not addressed in this type of study. 
[54] In conclusion, the main factors associated with HIV infection identified in this multivariate analysis characterize a specific type of street-based commercial sex work in Brazil: older women with little or no schooling, who charge less per sexual encounter and frequently engage in higher-risk sexual behavior. The small fee per sexual encounter determines the type of client, generally of low socioeconomic status and more likely to request unprotected sex. [27] Besides providing prevention knowledge and health promotion, interventions focusing on low-paid sex workers must emphasize the risk associated with unsafe sexual behavior with both clients and steady partners. [17] Ultimately, although the statistical analyses provide valuable information for developing targeted interventions, there is a further need to address other contextual factors. FSWs are exposed to multiple harms including illicit drug use, violence and criminality, exploitation, as well as stigma and discrimination. [55] Thus, comprehensive social interventions must focus on the multiple needs of this vulnerable population, including individual and contextual factors that can influence sexual behavior.
---
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. |
Background: With a growing health demand, patient satisfaction analysis is essential for evaluating the accessibility and performance of medical services. Previous studies have explored Chinese outpatient satisfaction and its influencing factors in developed areas and tertiary hospitals. Considering the lower education level, less income, and heavier economic burden, it was necessary to conduct a region-specific questionnaire survey of outpatients' satisfaction in rural Western China. Objective: To analyze the satisfaction with primary outpatient services in rural Western China, and explore the factors affecting outpatients' satisfaction. Methods: A questionnaire composed of nine 5-point Likert items was applied to survey outpatient satisfaction among randomly selected samples in 11 provinces of Western China. Exploratory factor analysis (EFA) was conducted to study the factor structure of the questionnaire. Stepwise multiple linear regression analysis was performed to study the influencing factors. Results: A total of 2,754 outpatients completed the questionnaire; the response rate was 88.7%. Respondents were most satisfied with medical staff service attitude (3.71±0.83) and least satisfied with medical cost (2.97±0.83). A 3-factor solution was adopted in EFA to explain the overall satisfaction. Factors identified were "Service attitude", "Facility and professional skills", and "Patients' cost". The questionnaire proved to have good reliability and acceptable internal consistency. The stepwise multiple linear regression analysis showed that factors including sample hospital type (P<0.05), age (P<0.001), education level (P<0.05), occupation (P<0.01), monthly income (P<0.05), and chronic disease conditions (P<0.01) were significantly associated with dimensional or overall satisfaction. The primary health care outpatient satisfaction in rural Western China is lower than in developed areas and tertiary hospitals. 
Care providers in backward regions should pay more attention to patients' demographic characteristics and health status, to meet outpatients' actual demand. Efficient hospital management methods, modern technology, and staff training are needed to improve the service quality and care efficiency. | Introduction
Patient satisfaction is consumers' evaluation of the effectiveness, safety, and benefit of health care services, a combination of patients' experience and perception. [1][2][3] Patient satisfaction is an important and commonly used indicator for measuring the quality of health care, and higher patient satisfaction leads to better clinical outcomes and less care resource utilization. Therefore, patient satisfaction surveys are essential for patients, health care providers, and health care payers. [4][5][6] Correspondence: Ying Bian, Institute of Chinese Medical Sciences, University of Macau, Avenida da Universidade, Room 2055, N22 Building, Taipa, Macau SAR, China. Tel +86 853 6520 5586. Email [email protected]
This article was published in the following Dove Press journal: Patient Preference and Adherence
Most patient satisfaction studies were conducted in the USA and European countries, suggesting that patients in flourishing regions tend to evaluate the quality of health care service based on waiting time, medical staff's proficiency, hospital environment, and participation in medical decision making. 3,4,[7][8][9][10] Several recent patient assessment studies were conducted in developing countries, including India, Thailand, Tanzania, and Ethiopia. Patients in these countries care more about the location of the health facility, hospital comfort, and access to appropriate services. [11][12][13][14][15] Patients' perceptions vary according to education level, age, income, and residence. 15 As proposed in previous literature, 16 with increasing population and patient expectations, patient satisfaction analysis is essential to evaluate the accessibility and quality of medical services, especially in developing countries such as China. Some researchers have explored outpatient satisfaction and the factors affecting Chinese patients' satisfaction, mostly in developed provinces or tertiary hospitals, performing descriptive analyses and satisfaction rating surveys. 17,18 Questionnaires are a commonly used satisfaction survey instrument. As reported in studies conducting univariate or regression analysis, factors including hospital environment, medical facility, service attitude, patients' involvement in decision making, doctors' and nurses' proficient skills, effective communication between patients and doctors, disease severity, medical cost, waiting time, and service time were associated with Chinese outpatients' satisfaction in advanced areas or tertiary hospitals. [19][20][21][22][23] Similar results were also previously demonstrated in Hong Kong and Taiwan studies. 
[24][25][26] According to Grossman (1973), 27 patients' demand for medical services is associated with demographic characteristics and socioeconomic conditions; therefore, patients' demand and satisfaction in backward areas should be measured differently from developed areas. 28 So far, no questionnaire studies have been designed to assess primary health care outpatients' satisfaction with large-sample evidence covering different backward provinces in China. Considering the relatively lower education level, less individual income, and heavier economic burden, it is necessary to conduct a region-specific questionnaire survey of outpatients' satisfaction in rural Western China. Consequently, the objective of this study was to conduct a satisfaction survey of primary outpatient services in rural Western China and explore the factors affecting outpatients' satisfaction.
---
Materials and methods
This research was approved by the research ethics committee of the Institute of Chinese Medical Sciences, University of Macau.
---
Questionnaire development
The initial questionnaire draft was designed based on a literature review; the literature search was conducted using the CNKI and PubMed databases in June 2014, with keywords including "outpatient", "satisfaction", "China" and "questionnaire". More than 50 previous outpatient satisfaction surveys and relevant studies were screened, and applicable information was extracted to compose the item pool. Local medical prices, reimbursement percentages, residents' income, and education levels were considered during the pilot study. According to the pilot study results and local physicians' advice, the final draft version contains 9 items, including time spent commuting to the hospital, waiting time, doctors' disease description, patients' participation in decision making, staff service attitude, hospital facility, hospital environment, medical cost, and doctors' and nurses' professional skills. Interviewees were also asked to fill in their background information, including age, gender, occupation, education level, monthly income, medical insurance type, and condition of chronic diseases.
The questionnaire was designed as a 5-point Likert scale, 29 and interviewees were asked to rate each item: very dissatisfied (1), dissatisfied (2), neither satisfied nor dissatisfied (3), satisfied (4), and very satisfied (5).
sampling
Eleven provincial-level divisions in Western China were selected to explore outpatient satisfaction: Ningxia Hui Autonomous Region, Guangxi Zhuang Autonomous Region, Xinjiang Weiwuer Autonomous Region, Gansu Province, Shaanxi Province, Qinghai Province, Sichuan Province, Guizhou Province, Yunnan Province, Inner Mongolia, and Tibet Autonomous Region. In each province, all counties were divided into 3 levels by GDP per capita, and 1 sample county was randomly selected from each level in each of the 11 provinces. The county general hospital, the maternal and child health center, and the hospital of Traditional Chinese Medicine in each sample county were recruited as sample hospitals. Fifty outpatients, drawn randomly from each sample hospital, were enrolled into the study and received the questionnaire when leaving the hospital. Written informed consent was obtained from all interviewees before they completed the questionnaire. All questions were explained by trained investigators.
The questionnaire for interviewees aged <14 years was answered by their accompanying adult guardian.
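The multistage sampling design described above (stratify counties by GDP per capita, draw one county per stratum, then survey a fixed quota at each of the 3 hospital types) can be sketched as follows. The county names, GDP figures, and per-province counts are illustrative assumptions, not the study's actual data.

```python
import random

random.seed(2014)

# Hypothetical example: county GDP-per-capita figures for one province.
counties = {f"county_{i:02d}": random.randint(8_000, 60_000) for i in range(30)}

# Step 1: rank counties by GDP per capita and split into 3 equal strata.
ranked = sorted(counties, key=counties.get)
tercile = len(ranked) // 3
strata = [ranked[:tercile], ranked[tercile:2 * tercile], ranked[2 * tercile:]]

# Step 2: randomly draw one sample county from each stratum.
sample_counties = [random.choice(stratum) for stratum in strata]

# Step 3: in each sample county, survey 50 outpatients at each of the
# 3 designated hospital types (general, maternal/child, TCM).
HOSPITAL_TYPES = ["general", "maternal_child", "tcm"]
PATIENTS_PER_HOSPITAL = 50
planned_n = len(sample_counties) * len(HOSPITAL_TYPES) * PATIENTS_PER_HOSPITAL

print(sample_counties)
print(planned_n)  # 3 counties x 3 hospitals x 50 patients = 450 per province
```

In practice recruitment shortfalls and non-response would reduce the achieved sample below this planned quota.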
---
statistical analysis
The missing item rate and total response rate were used to assess the questionnaire's acceptability and feasibility. Descriptive data were tabulated to present participants' demographic and other background characteristics. Exploratory factor analysis (EFA), using principal component analysis and varimax rotation, was conducted to assess the dimensionality of overall satisfaction, evaluate the structural validity, and reduce the number of variables. [30][31][32] Multiple linear regression analysis with a stepwise method (p for entry set at 0.05 and p for removal at 0.1) was applied to explore the association between outpatients' characteristics and satisfaction factor scores. Outpatients' background characteristics that showed significant differences in univariate analysis were included as independent variables, with dimensional and overall factor scores as dependent variables.
All data analysis was performed using SPSS version 19.0.
---
Results
The questionnaire survey was conducted from October to December 2014. A total of 3,193 patients participated in the survey, and 2,754 questionnaires were fully completed. The missing value rate for each item was 0.2%-0.5%, and the total response rate was 88.7%, indicating that the questionnaire was acceptable and feasible.
---
Descriptive findings
The descriptive results of participants' demographic and other background characteristics are presented in Table 1. The mean age of respondents was 36.86 years (SD=14.30); 40% were female and 60% were male. Only 40% of interviewees had completed at least high school education. Farming was the most common occupation, and 70.4% of respondents earned <2,001 yuan (~330 USD) per month. Most participants were enrolled in public medical insurance: 14.6% of respondents were insured by Urban Employee Basic Medical Insurance (UEBMI), 9.5% by Urban Resident Basic Medical Insurance (URBMI), and 60.2% by the New Rural Cooperative Medical Scheme (NRCMS). A total of 20% of the participants had chronic diseases.
---
Outpatients satisfaction item scores
The results of the outpatients' satisfaction survey are shown in detail in Table 2. According to the mean scores, respondents were most satisfied with medical staff service attitude (3.71±0.83) and second, doctors' disease description (3.64±0.84). Medical cost was the least satisfactory item (2.97±0.83). The sum of mean scores of 9 items was 30.58, with the maximum score of 45, which was relatively lower than the average satisfaction scores in previous studies.
---
Factor analysis
EFA was conducted to explore the dimensionality of the overall outpatients' satisfaction in rural Western China and analyze the validity of the dimensional structure.
The overall Cronbach's α value was 0.75, suggesting good reliability. The Kaiser-Meyer-Olkin measure for the dataset was 0.804, Bartlett's test was 6,002.289 (P<0.001), and all item-total correlations exceeded 0.50, implying that the data were adequate for EFA. [31][32][33] Principal component analysis and varimax rotation were adopted. As shown in Table 3, only 2 factors had eigenvalues >1. 34 However, considering the cumulative variance percentage and the scree plot (see Figure 1), 35,36 a 3-factor solution was applied in the factor analysis.
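The factor-retention logic above (eigenvalues of the item correlation matrix, the Kaiser criterion, and cumulative variance explained) can be illustrated on synthetic data. The simulated loadings and sample size are assumptions for demonstration only; unlike the real data, this toy example happens to yield three eigenvalues above 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 9 Likert-style items driven by 3 latent factors (illustrative).
n, n_items = 500, 9
factors = rng.normal(size=(n, 3))
loadings = np.zeros((n_items, 3))
loadings[0:3, 0] = 0.8   # items 1-3 load on factor 1 ("service attitude")
loadings[3:6, 1] = 0.8   # items 4-6 load on factor 2 ("facility/skills")
loadings[6:9, 2] = 0.8   # items 7-9 load on factor 3 ("patients' cost")
items = factors @ loadings.T + rng.normal(scale=0.6, size=(n, n_items))

# Principal components of the item correlation matrix.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

kaiser_k = int(np.sum(eigvals > 1))           # Kaiser criterion: eigenvalue > 1
cum_var = np.cumsum(eigvals) / eigvals.sum()  # cumulative variance explained
print(kaiser_k, cum_var[:3])
```

A varimax rotation would follow factor extraction in a full EFA; only the eigenvalue diagnostics used for choosing the number of factors are shown here.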
According to the factor loading values, 9 items were explained by 3 dimensions. Factor 1 "Service attitude" consisted of the 3 items of the questionnaire, including patients' participation in decision making, doctors' disease description, and medical staff service attitude. Factor 2 "Facility and professional skills" consisted of hospital environment, hospital facility, and doctors and nurses' professional skills. Factor 3 "Patients' cost" consisted of 3 items, including waiting time, time spent commuting to hospital, and medical cost.
The internal consistency of the matrix was examined. For Factors 1 and 2, the Cronbach's α values were 0.733 and 0.740, respectively, which were considered very good. The inter-item correlation (IIC) can also be adopted as a measure of internal consistency when the number of items in the scale is <10, with values between 0.2 and 0.4 considered acceptable. Thus, the internal consistency of Factor 3 was considered acceptable although its Cronbach's α value was only 0.542. 32,37 More details of the EFA results are available in Table 4.
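As a sketch of the two internal-consistency measures used here, the following computes Cronbach's α and the mean inter-item correlation for a hypothetical 3-item factor; the score matrix is made up for illustration.

```python
import numpy as np

# Illustrative scores: respondents x items for one factor (made-up values).
scores = np.array([
    [4, 4, 5],
    [3, 3, 4],
    [2, 3, 3],
    [5, 4, 4],
    [3, 2, 3],
    [4, 5, 4],
], dtype=float)

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def mean_inter_item_corr(x):
    """Average of the off-diagonal pairwise item correlations."""
    r = np.corrcoef(x, rowvar=False)
    return r[np.triu_indices_from(r, k=1)].mean()

print(round(cronbach_alpha(scores), 3))
print(round(mean_inter_item_corr(scores), 3))
```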
---
Factors associated with outpatient's satisfaction
The regression method was used to estimate the factor score coefficients, and the scores of Factor 1 (F1), Factor 2 (F2), and Factor 3 (F3) were produced by SPSS software. To comprehensively explore the characteristics associated with outpatients' satisfaction, the factor score of overall satisfaction (F) was calculated based on the scores and variance contribution rates of the 3 main factors: 31,38
F = 0.22872 × F1 + 0.21113 × F2 + 0.18192 × F3
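The combination rule above is plain arithmetic: a weighted sum of the three factor scores, with each weight equal to that rotated factor's variance contribution rate (taken from the formula above). A minimal sketch, with hypothetical factor scores:

```python
# Variance-contribution weights from the formula above.
WEIGHTS = (0.22872, 0.21113, 0.18192)

def overall_score(f1, f2, f3):
    """Overall satisfaction F as the weighted sum of the 3 factor scores."""
    return WEIGHTS[0] * f1 + WEIGHTS[1] * f2 + WEIGHTS[2] * f3

# Example: a respondent scoring 0.5 SD above average on every dimension.
print(overall_score(0.5, 0.5, 0.5))
```

Note that the weights sum to 0.62177, the total variance explained by the 3 factors, so F is not rescaled to unit variance.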
Stepwise multiple linear regression was conducted to investigate the factors influencing the 3 main dimensions and overall satisfaction. Patients' age, sample hospital type, education, occupation, monthly income, medical insurance type, and chronic disease condition showed significant differences in the univariate analysis and were thus included in the regression model as independent variables, with F1, F2, F3, and F as dependent variables.
Table 5 presents the results of the stepwise multiple linear regression, which demonstrated that age, sample hospital type, education, occupation, monthly income, and chronic disease condition were significantly associated with satisfaction. According to the coefficients and P-values, significant differences were observed among respondent populations in dimensional satisfaction: older outpatients had higher satisfaction scores in all 3 dimensions; interviewees from general county hospitals were more satisfied with "Facility and professional skills" but less satisfied with the other 2 dimensions than those from the other county hospitals; compared with less-educated patients, respondents who had completed at least junior middle school were more satisfied with "Service attitude" and "Facility and professional skills" but less satisfied with "Patients' cost"; outpatients with chronic diseases graded "Service attitude" and "Facility and professional skills" higher; higher-income participants were more satisfied with "Patients' cost" and less satisfied with "Service attitude"; and teachers, government staff, service industry workers, business workers, enterprise employees, and retirees were more satisfied with "Service attitude" than farmers, workers, students, and the unemployed.
Different respondent populations also showed significant differences in overall satisfaction: overall satisfaction increased significantly with age; interviewees from the other county hospitals were more satisfied than those from general county hospitals; outpatients with chronic diseases graded overall satisfaction higher; more highly educated respondents were more satisfied; and teachers, government staff, service industry workers, business workers, enterprise employees, and retirees were more satisfied with overall outpatient health care than farmers, workers, students, and the unemployed.
---
Discussion
Patients' satisfaction is an important and commonly used indicator for analyzing patients' demand and the performance and utilization of medical services. Patient satisfaction research is therefore essential in the process of China's health system reform. Although several patient satisfaction reports have been published, no intensive study has investigated outpatients' satisfaction with large-sample evidence covering China's backward provinces.
In this research, county hospital outpatients' satisfaction in 11 provinces of Western China was analyzed. A total of 2,754 outpatients completed the questionnaire, and the response rate was 88.7%. Among the included respondents, 45.5% were farmers and 60.2% were insured under the NRCMS. According to China's 2013 National Health Service Survey (NHSS) report, 39 Western China residents have a lower outpatient satisfaction rate than Middle and Eastern China. Rural residents report higher satisfaction with "medical staff" (80.3%-82.7%) than with "hospital environment" (67.9%) and "waiting time" (70.7%), supporting the higher satisfaction with "service attitude", "doctors' disease description", and "medical staff professional skills" found in this paper. "Medical cost" was the least satisfactory item, as previously reported in the NHSS report and in an outpatient satisfaction study in Beijing. 23,39 In this study, no satisfaction item score exceeded 3.8, lower than the overall satisfaction score (4.67±0.62) reported for outpatients in tertiary hospitals. 20 This result is consistent with previous references, 40,41 suggesting that primary health care outpatients in rural Western China were relatively less satisfied with medical services than those in developed areas or tertiary hospitals. EFA results showed a 3-dimension structure of overall outpatient satisfaction ("Service attitude", "Facility and professional skills", and "Patients' cost", with 3 items each) and demonstrated acceptable reliability and validity. The stepwise multiple linear regression results indicated that age, sample hospital type, education, occupation, monthly income, and chronic disease condition were statistically associated with dimensional or overall outpatient satisfaction.
The overall satisfaction increased with age, which is consistent with reports of outpatient satisfaction in Ningxia province and Shanghai. 21,42 Respondents with chronic diseases tended to report higher overall satisfaction, and a similar trend was observed in a satisfaction study from developed areas. 23 It is possible that the elderly and patients with chronic diseases, compared with the young and patients without chronic diseases, were more experienced and trusted their doctors more because of their health status. Participants from county-level general hospitals were less satisfied with "Patients' cost" than those from the other county hospitals; since waiting time has been reported as the most important item of patient satisfaction in general hospitals, 43 this result is in agreement with the lower overall satisfaction in county general hospitals. Outpatients with higher incomes were more satisfied with "Patients' cost" but less satisfied with "Service attitude", suggesting an increasing demand for a better service attitude as the income of Chinese residents grows. Contrary to previous evidence, 44 better-educated outpatients, although less satisfied with "Patients' cost", were more satisfied with "Service attitude", "Facility and professional skills", and overall outpatient service. It is probably the accessibility of high-quality medical services, rather than their affordability, that influences the overall satisfaction of educated outpatients in rural Western China. Teachers, government staff, service industry workers, business workers, enterprise employees, and retirees had higher overall satisfaction than farmers, workers, students, and the unemployed, which may be related to the more stable income and higher reimbursement rate of the former.
Patient satisfaction reflects residents' health care demand, and understanding the demand of patients under different conditions plays a crucial role in improving the performance and efficiency of medical services. There are both similarities and differences between the results of this survey and the previous literature, suggesting that outpatients' health care demand in rural Western China has its own characteristics. As reported in China's 2013 NHSS, 39 "high medical cost" was the primary cause of rural outpatient dissatisfaction in recent years, followed by "poor professional skills" and "bad service attitude". However, considering the associations between outpatients' characteristics and satisfaction in this study, the primary demand of some rural outpatients has shifted from lower prices to higher efficiency, a better service attitude, and professional skills. Since China implemented the national Essential Medicines List in 2009 and required the removal of drug profit margins in public hospitals in 2017, universal access to affordable essential medicines should be promoted within a few years; health care providers should therefore identify and target more initiatives to improve service attitude, environment, and professional skills. In rural Western China, efficient hospital management methods, modern technologies, and more staff training are needed to improve health care service quality. Implementation of an electronic patient record system and a consultation desk in each department may reduce patient queuing time and doctor preparation time in general hospitals. 45 More interpersonal communication training would help doctors and nurses explain diseases and improve their service attitude. 46 Rural primary health care institutions are also advised to develop more patient education on chronic disease management, promote patient participation, and improve care efficiency. 47
Most published satisfaction questionnaire surveys of Chinese patients were single-center studies in tertiary hospitals or developed regions, 23,48 conducting regression analysis on mean satisfaction scores or satisfaction rates. 21,44 Compared with the previous literature, this study developed a questionnaire to collect primary health care outpatients' satisfaction scores in rural Western China, then conducted EFA and multiple linear regression to describe the satisfaction dimensions and associated factors, exploring rural outpatient satisfaction from another perspective. It is the first outpatient satisfaction questionnaire study based on EFA with multi-province evidence from backward areas of China, and the questionnaire showed acceptable reliability and good feasibility.
---
limitations
Although this study had a large sample size and reliable results, some limitations should be noted: 1) sampling was not conducted in proportion to the population distribution of the 11 provinces, which may have introduced deviation in the sample's composition; and 2) differences in economic level and medical service quality among the 11 provinces were not controlled for, so the satisfaction differences they may have caused could not be explained in this study.
---
Conclusion
On the whole, primary health care outpatient satisfaction in rural Western China is lower than in developed areas and tertiary hospitals. "Service attitude", "Facility and professional skills", and "Patients' cost" were the 3 main dimensions of overall satisfaction, with significant differences among patients with different demographic characteristics and chronic disease conditions. Local health care institutions should evaluate and manage outpatient service quality based on the actual needs of patients, considering patients' demographic characteristics and health status. Efficient hospital management methods, modern technologies, and staff training are needed to improve the quality of medical service and care efficiency in backward areas.
---
Disclosure
The authors report no conflicts of interest in this work.
---
Patient Preference and Adherence
Background: The association between socioeconomic disadvantage (low education and/or income) and head and neck cancer is well established, with smoking and alcohol consumption explaining up to three-quarters of the risk. We aimed to investigate the nature of and explanations for head and neck cancer risk associated with occupational socioeconomic prestige (a perceptual measure of psychosocial status), occupational socioeconomic position and manual-work experience, and to assess the potential explanatory role of occupational exposures.
Methods: Pooled analysis included 5818 patients with head and neck cancer (and 7326 control participants) from five studies in Europe and South America. Lifetime job histories were coded to: (1) occupational social prestige: Treiman's Standard International Occupational Prestige Scale (SIOPS); (2) occupational socioeconomic position: International Socio-Economic Index (ISEI); and (3) manual/non-manual jobs.
Results: For the longest held job, adjusting for smoking, alcohol and nature of occupation, increased head and neck cancer risk estimates were observed for low SIOPS OR=1.88 (95% CI: 1.64 to 2.17), low ISEI OR=1.74 (95% CI: 1.51 to 1.99) and manual occupations OR=1.49 (95% CI: 1.35 to 1.64). Following mutual adjustment by socioeconomic exposures, risk associated with low SIOPS remained OR=1.59 (95% CI: 1.30 to 1.94).
Conclusions: These findings indicate that low occupational socioeconomic prestige, position and manual work are associated with head and neck cancer, and such risks are only partly explained by smoking, alcohol and occupational exposures. Perceptual occupational psychosocial status (SIOPS) appears to be the strongest socioeconomic factor, relative to socioeconomic position and manual/non-manual work.
---
INTRODUCTION
Globally, head and neck cancers, comprising cancers of the oral cavity, oropharynx, hypopharynx and larynx, account for over 700 000 new cases diagnosed and over 350 000 deaths each year, representing 4% of all new cancers in Europe and South America. 1 2 Worldwide, trends of these cancers are on the rise-particularly in the oropharyngeal cancer subsite. [3][4][5] The major risk factors for head and neck cancer are tobacco use and alcohol consumption (particularly in combination), which comprise around 70% of the population attributable risk. 6 7 Human papillomavirus (HPV) infection is an emerging risk factor for oropharyngeal cancer. 8 9 Across all head and neck cancers, socioeconomic risk associations are comparable in magnitude to those of behavioural risk factors, with the greatest burden of head and neck cancer observed in those with the lowest incomes and education levels. 10 Tobacco smoking and alcohol consumption explain approximately two-thirds of the socioeconomic relationship, and this association persists when controlling for smoking or alcohol behaviour and among never smokers and never alcohol drinkers. 10 A previous systematic review and meta-analysis of published risk estimates found consistent elevated risk for oral cancer associated with low occupational socioeconomic position, 11 and an earlier small case-control study of larynx cancer suggested the occupational socioeconomic relationship was partly explained by smoking, alcohol consumption and substantially attributed to occupational exposures. 12 The relationship between occupational-related socioeconomic factors and head and neck cancer risk has not been examined in detail. Socioeconomic classification of occupations is multidimensional
and includes measures of occupational social position, prestige and class. 13 14 While occupational social classifications are largely related to the income and/or educational attainment required for the job, occupational social prestige explicitly relates to ranking of jobs based on normative admiration or respect. 15 Occupational socioeconomic prestige is derived from multiple factors such as psychosocial aspects, work stress, job control and social support networks. 13 14 Low relative to high and downward lifetime trajectories of occupational socioeconomic prestige have previously been linked with cancer risk 16 and particularly lung cancer in men. 15 Here we investigate the risk associations of occupational social prestige, occupational socioeconomic position, and manual occupations for head and neck cancer. We thoroughly assess explanatory factors including smoking, alcohol and occupational exposures, and we explore differences in these risk associations by gender, global region, and head and neck cancer subsite.
---
METHODS
The original data studies of the International Head and Neck Cancer Epidemiology (INHANCE) Consortium (http://inhance.iarc.fr/) have been described in detail elsewhere. 6 17 Briefly, we used data from five frequency-matched case-control studies, which provided databases with occupational histories, containing occupational and industrial codes, in addition to the INHANCE pooled database (V.1.5). We included studies from Western Europe, 18 Latin America, 19 Germany (Heidelberg), 20 and two studies from France (1989-1991) 21 and (2001-2007), 22 which were all multicentre studies except for the German study. Online supplemental file 1 shows the main characteristics of these studies. We omitted participants with missing information on smoking behaviour (n=176), alcohol consumption (n=218), and missing or largely incomplete occupational history data (n=1071).
Cases comprised cancers of the oral cavity, oropharynx, hypopharynx and larynx. Control participants were recruited either in hospitals (France (1989-1991), Latin America) or in the general population (France (2001-2007), Germany (Heidelberg)). Both types of recruitment were used in the Western Europe study (online supplemental file 1).
---
Occupational socioeconomic position and prestige data
We assigned indices of socioeconomic position and prestige on the basis of participants' occupational histories, which contained job periods already coded by the International Standard Classification of Occupations of 1968 (ISCO68). 23 We considered occupational histories before retirement, reviewed all job periods, and deleted periods with missing or implausible information for ISCO68, start year or end year. We then excluded data of participants from the analysis if: their occupational history spanned less than 10 years, but only if they were also >30 years at the time of the study 15 ; and if less than 50% of their job history had ISCO68 codes.
We assigned Treiman's Standard International Occupational Prestige Scale (SIOPS) to the job histories. 24 SIOPS assigns prestige ratings to occupations, ranging from 14 (lowest prestige, for example, unspecified and unskilled agricultural workers) to 78 (highest prestige, for example, physicians). Based on the distribution of SIOPS scores among controls, we categorised the SIOPS score range into quartiles (14-30, 31-39, 40-48, 49-78). We also coded the jobs to the International Socio-Economic Index of occupational position (ISEI) in the version corresponding to ISCO68, 25 which comprises scores with a range from 10 (lowest position, for example, cook's helpers) to 90 (highest position, for example, judges). As for SIOPS, we constructed quartiles based on the ISEI distribution in the control group (10-31, 32-39, 40-55, 56-90). Both SIOPS and ISEI were assigned on the basis of three-digit levels of ISCO68 codes. We further applied ISCO68 codes to manual and non-manual job groupings as previously described. 26 For analyses, from the coded occupational histories, we selected the longest held job for the primary analyses, but also assessed the first job, last job, the jobs with the highest ever reached SIOPS and ISEI scores, and 'ever employed in manual job', respectively.
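The quartile construction described above (cut-points taken from the control distribution only, then applied to all participants) might look like the following sketch; the synthetic scores and the tie-handling at the boundaries are illustrative assumptions.

```python
import numpy as np

# Synthetic SIOPS scores (real SIOPS ranges 14-78); values are illustrative.
rng = np.random.default_rng(1)
control_siops = rng.integers(14, 79, size=1000)
case_siops = rng.integers(14, 79, size=800)

# Quartile boundaries derived from the CONTROL distribution only.
q1, q2, q3 = np.percentile(control_siops, [25, 50, 75])

def siops_quartile(score):
    """Return 1 (lowest prestige) .. 4 (highest prestige)."""
    if score <= q1:
        return 1
    if score <= q2:
        return 2
    if score <= q3:
        return 3
    return 4

# Every participant, case or control, is categorised against the same cuts.
case_quartiles = [siops_quartile(s) for s in case_siops]
print(sorted(set(case_quartiles)))
```

Defining cut-points on controls means roughly a quarter of controls fall in each category by construction, so any imbalance among cases directly reflects their different prestige distribution.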
Occupational data were further used to represent occupational exposure to carcinogens for head and neck cancer. We integrated the investigated ISCO68 categories in a new list of risk occupations (online supplemental file 2) where (a) ORs for the comparison of ever versus never having worked in an ISCO68 occupation were elevated and (b) if ORs were increasing for 10 or more years of employment. Our job history data did not contain sufficient information to accurately assign industries and assess their risk associations. Although based on results for men, we applied the new list of risk occupations to both men and women. We distinguished whether participants were ever employed in risk occupations for 10 or more years. 27 28 Finally, based on additional coding from three studies (Western Europe, France (2001-2007) and Germany (Heidelberg)), we characterised participants as ever or never having experienced unemployment.
---
Statistical analysis
We investigated head and neck cancer risk associations with occupational socioeconomic prestige, position, manual versus non-manual occupation and unemployment experience. We estimated ORs with 95% CIs by unconditional logistic regression. Based on a model adjusting for sex, age (years) and study centre (model 1), we added further variables in cumulative steps to study the impact on the investigated association. We first added cigarette smoking behaviour (smoking status (never, former, current), duration (years), smoking intensity (average daily amount of cigarettes) and cigarette pack-years (model 2)). Never smokers were participants who had smoked less than 100 cigarettes during their lifetime. Former smokers were participants who quit smoking more than 1 year before study participation. In the next step, we additionally considered alcohol consumption (model 3) by adjusting for drinking status (ever/ never), drinking intensity, that is, average amount of alcoholic drinks per day (15.6 mL of ethanol per drink), and an interaction term of smoking (duration) and alcohol (intensity). 6 We further adjusted for ever/never employed in a risk occupation (at least 10 years) ('full' model 4).
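For intuition about the reported effect measures, the sketch below computes a crude OR with a Woolf-type (log-based) 95% CI from a 2x2 table. The counts are hypothetical, and this deliberately ignores the multivariable adjustment that the study's unconditional logistic regression models perform.

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls,
                  unexposed_cases, unexposed_controls, z=1.96):
    """Crude OR with a Woolf 95% CI: exp(ln(OR) +/- z * SE(ln OR))."""
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    # Standard error of ln(OR): sqrt of the sum of reciprocal cell counts.
    se_log = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                       + 1 / unexposed_cases + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: low- vs high-prestige longest held job.
or_, lo, hi = odds_ratio_ci(400, 300, 200, 350)
print(f"OR={or_:.2f} (95% CI: {lo:.2f} to {hi:.2f})")
```

Adjusted ORs like those in model 4 come instead from the fitted logistic regression coefficients, but the exponentiation of estimate and confidence bounds follows the same pattern.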
---
Sensitivity and stratified analyses
We further adjusted for the respective other socioeconomic position and prestige variables (SIOPS, ISEI, manual/non-manual) (model 5). We applied model 5 to analyse unemployment, but did not adjust for unemployment due to the missing data. The main analyses were based on the longest held job. Additional sensitivity analyses involved using the first and the last job as well as the highest ever reached SIOPS/ISEI or 'ever employment in manual job', respectively. We alternatively included SIOPS and ISEI as continuous variables. All further analyses were also based on SIOPS for the longest job and the 'full' model. Analyses were stratified by sex, tumour subsite (oral cavity, oropharynx, hypopharynx, larynx), study region (Europe, Latin America),
type of control recruitment (hospital or population-based), and single as well as combined stratification for ever or never use of cigarettes and alcohol. Further sensitivity analyses included exploring differences observed by study regions; and-using model 1-examining those participants who were initially excluded because of largely incomplete occupational histories. Finally, we performed multiple imputation on missing smoking and alcohol information (predicted on respective available smoking and alcohol data by age, sex and study centre), and recalculated model 4. All analyses were performed with SAS V.9.4 (SAS Institute).
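The final sensitivity step, multiple imputation of missing smoking and alcohol data, can be caricatured for a single binary variable as below. This is a deliberately minimal sketch: the study imputed predictively from age, sex and study centre, whereas here each missing value is simply drawn from the observed distribution and the point estimates are averaged across imputed datasets.

```python
import random
import statistics

random.seed(42)

# Hypothetical data: observed ever-smoker indicators, plus 4 missing values.
observed = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
n_missing = 4
p_obs = sum(observed) / len(observed)  # observed smoking prevalence

M = 20  # number of imputed datasets
estimates = []
for _ in range(M):
    # Draw each missing value from the observed distribution.
    imputed = [1 if random.random() < p_obs else 0 for _ in range(n_missing)]
    completed = observed + imputed
    estimates.append(sum(completed) / len(completed))

# Pool the point estimates across imputations (Rubin's rule for the mean).
pooled = statistics.mean(estimates)
print(round(pooled, 3))
```

Averaging over many imputed datasets, rather than filling in a single value, propagates the uncertainty about the missing data instead of hiding it.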
---
RESULTS
We included 13 144 participants (5818 cases, 7326 controls) in the final analysis. Table 1 describes the study population. Lower categories of socioeconomic position and prestige indices were more frequent among cases. Only about one-third of overall cases had longest held jobs in the first or second quartiles of SIOPS and ISEI, respectively, whereas this proportion was about 50% among controls. Overall, 36% of cases compared with 22% of controls had ever worked in a risk occupation for at least 10 years, with lower proportions for women. Unemployment experience (data available for three of the five studies; approximately three-quarters of participants) was slightly higher for male cases than male controls.
Associations of occupational socioeconomic position and prestige are shown in table 2. For all indices, ORs increased with lower position/prestige. ORs were attenuated by all further adjustments, with the greatest effect through adjustment for cigarette smoking. Adjustment for alcohol consumption and employment in risk occupations only marginally reduced risk estimates. After adjustment for all behaviours and risk occupations, strong associations between low position/prestige and head and neck cancer persisted, with ORs for the lowest relative to highest categories of SIOPS: 1.88 (95% CI: 1.64 to 2.17), ISEI: 1.74 (95% CI: 1.51 to 1.99) and manual occupations: 1.49 (95% CI: 1.35 to 1.64). Accordingly, SIOPS and ISEI on a continuous scale were significant parameters in the fully adjusted model (online supplemental file 3).
In the model mutually adjusting for the other socioeconomic measures, the SIOPS risk association remained: OR 1.59 (95% CI: 1.30 to 1.94). Additional sensitivity analyses showed risk associations were slightly lower for the first job, and elevated for the last job and for the highest SIOPS and ISEI (online supplemental file 4). The subgroup analysis of participants who had ever experienced unemployment showed slightly elevated risks for head and neck cancer in the fully adjusted model.
Results for the stratified analyses of risk associations are shown in table 3A,B for SIOPS, and in online supplemental file 5A,B for both ISEI and manual/non-manual occupation. The risk associations were consistently lower for women than men. In contrast to the European studies, we did not find a similar strength of association in Latin America. When we stratified by tumour subsite, we found stronger associations for cancer of the larynx (OR 1.96 (95% CI: 1.60 to 2.42)) and hypopharynx (OR 2.61 (95% CI: 1.92 to 3.55)) than for the oral cavity (OR 1.63 (95% CI: 1.27 to 2.09)) or oropharynx (OR 1.68 (95% CI: 1.34 to 2.11)). Stratification by type of control recruitment showed increased ORs for population-based recruitment, and reduced ORs for hospital-based recruitment. Risk associations for low relative to high SIOPS were reduced among combined never smokers and never alcohol drinkers, with greater attenuation associated with never smoking (only) than never drinking (only). Sensitivity analysis including participants initially excluded due to largely incomplete occupational histories did not change estimates, either for Europe or for Latin America; multiple imputation for missing smoking and alcohol information also changed estimates only marginally (data not shown).
---
DISCUSSION
We found consistently elevated risk associations for head and neck cancer with low occupational social prestige, low occupational socioeconomic position and manual work. These findings were only partly explained by smoking, alcohol drinking or working in recognised higher risk occupations. However, among the small subgroup of never smokers and never drinkers, the risks associated with lower social prestige and class were completely attenuated. The overall findings were stronger among men than women, for cancers of the larynx and hypopharynx, and observed in Europe, but not in Latin America.
Inequalities in health outcomes (including cancer) are driven by social determinants: inequalities in income, wealth and power. 29 Our analysis taps into several of these domains, particularly the power relationships that arise from different occupational strata (captured here by social prestige), and shown to be important in health outcomes. 30 SIOPS is based on the social prestige given to different occupational groupings. McCartney et al recently reappraised theories of social class and their application to the study of health inequalities. 31 They noted that SIOPS and ISEI, unlike traditional categorical occupational social class schemes, employ a continuous or gradational hierarchy based on relative social advantage. 32 While ISEI captures more material aspects of socioeconomic position, as it is derived from education and income aspects of occupations, the use of the SIOPS ('prestige') measure enables more direct inference of the psychosocial dimension. 13-16 Although SIOPS, ISEI and manual versus non-manual reflect different socioeconomic 'class' dimensions, they are all occupation-based indices and are known to be strongly correlated. 25 We found the strongest head and neck cancer risk associations for prestige, with socioeconomic position and manual occupations slightly lower. This points to the importance of the psychosocial and material dimensions of the occupational socioeconomic relationship with head and neck cancer, although the environmental aspect is also relevant.
While there are recognised head and neck cancer risk associations with certain occupations, 27 we found only a limited inter-relationship between occupational risk and the socioeconomic dimensions of occupations. Earlier studies suggested that occupational exposures were responsible for about one-third of total cancer difference between high and low socioeconomic groups. 33 In our data, for head and neck cancer, occupational exposures attenuated the socioeconomic excess risk associations (model 4 vs model 3) by around 20%. However, this type of comparison of estimates may be biased in logistic regression models. 34 35 Smoking is undoubtedly a major risk factor for head and neck cancer 6 and a major explanatory factor for all socioeconomic health inequalities. 10 Alcohol consumption also compounds head and neck cancer risk, 6 7 and clustering of these risk factors is also observed in lower socioeconomic groups. 11 We observed, following thorough adjustment of many dimensions of smoking and alcohol behaviours, that the risk associations with occupational socioeconomic measures reduced (but not fully). Elevated head and neck cancer risks associated with lower socioeconomic positions among never
smokers and/or never alcohol drinkers suggest some potential residual effects of smoking and alcohol consumption. However, it should be noted that there are very small numbers of never smokers and never drinkers, which makes this estimate less reliable. Non-linearity of smoking and alcohol could risk misspecification and residual confounding, 36 so we undertook a post-hoc analysis with log-transformed smoking and alcohol variables, which did not change the socioeconomic factors' risk associations (data not shown). Stronger socioeconomic risk associations for hypopharynx and larynx cancers compared with oral cavity and oropharynx cancers point to a dominant role of smoking in explaining these associations. A previous INHANCE analysis showed that smoking had a significantly greater risk association for laryngeal cancer than oral cavity/pharynx cancer. 37 However, because alcohol and smoking are highly correlated, when adjusting for smoking there is likely to be some adjustment for alcohol drinking, so alcohol's role in contributing to inequalities in head and neck cancer cannot be discounted. Health inequalities and cancer risks associated with socioeconomic factors have generally been observed to be stronger among men than women. 38 Our study is no exception; likely explanations include the lack of data in women and particular difficulties, in older generations, in classifying women by occupational social classifications, 13 reflected in the male database that was used for construction of SIOPS/ISEI. 24 25 Suggestions that health inequalities affect women to a lesser degree are increasingly recognised as unfounded. 39 40 Our finding of a lower risk association in Latin America was unexpected as it contradicted the original socioeconomic analysis of these data, 40 which found elevated ORs associated with non-manual ('social class') occupations.
The socioeconomic distribution of controls was different from the other studies, that is, the Latin American controls were generally
from lower socioeconomic groups, and more similar to the case distribution. Post-hoc analysis, building SIOPS/ISEI quartiles based on the Latin American control distribution (rather than the overall control distribution), did not change the findings. The Latin American study employed hospital controls, which we found overall had lower risks (consistent across SIOPS and ISEI). In a further post-hoc analysis, removing the Latin America data from the stratified analysis, the ORs for hospital controls did not change, which could indicate that type of recruitment accounted for the difference rather than study region. Moreover, the continental difference observed was unlikely to be due to conceptual sociological differences in the measures across the countries, as SIOPS has been shown to be stable across very diverse cultures, 24 and ISEI was validated internationally (including Brazil). 25 Our study has several strengths, including the relatively large size with nearly 6000 cases and over 7000 controls from five robust well-designed multicentre case-control studies with harmonised data. 17 41 The large size of the study with good quality socioeconomic and behavioural risk factor data enabled risk estimates to be examined and confounders to be thoroughly adjusted for. Methodological strengths included multiple sensitivity analyses to test the robustness of the results.
There were also limitations of this study, including unquantifiable measurement errors, data availability limitations and residual confounding. We were only able to include 5 of the possible 35 studies in INHANCE, with no studies from North America or indeed South Asia. 41 Included studies had to have prior ISCO-coded occupational histories. The occupational risks derived from these codes are probably too imprecise to indicate specific exposure to occupational carcinogens, so residual confounding is a possibility. It was also not possible to examine the industrial dimensions of occupations in this study, which have previously been shown to be related to socioeconomic inequalities in cancer incidence. 42 43 Lifetime duration of alcohol consumption (even over a short period) has been shown to increase cancer risk; 44 however, this variable was missing from some of the studies and could not be included in the analysis. Data on HPV were also not available for the studies in this analysis and could be an important factor, particularly in relation to oropharyngeal risks. 8 9 Recall bias is also a possibility, although it is unlikely that cases reported their occupational history differently from controls. 27 In addition, periods of housework or part-time work (more common among women) were excluded, which could have underestimated socioeconomic effects. 45 Selection bias could potentially impact the findings, particularly in the hospital-based centres where the controls are potentially of similar socioeconomic and risk behaviour profiles to the case participants. Indeed, our findings were stronger in study centres with a population-based design.
Previous INHANCE socioeconomic analyses of income and education found no differences between hospital and population-based controls, which is reassuring against the risk of selection bias; measures undertaken in the studies that used hospital-based control sampling to reduce selection bias included recruiting patients attending hospital neither for cancer nor for conditions related to the main behavioural risk factors. 10 Finally, SIOPS and ISEI have not been updated since their creation in the late 20th century and may not reflect recent occupational socioeconomic structures. However, the indices used were appropriate for the decades when most of the participants were employed, and job ranking by SIOPS has been shown to be consistent over time. 24 There has been a general shift from manual to low-level service occupations which may not be captured by these socioeconomic measures, although this would have had a minimal impact as our data were largely collected in
the early 2000s (with mean participant age of 50-60 years); further analyses of trajectories of occupational socioeconomic prestige could subsequently be undertaken.
---
CONCLUSIONS
Our results indicate that occupational socioeconomic prestige, position and manual work are associated with head and neck cancer, and this risk is only partly explained by smoking and alcohol exposure. Occupational exposures were not a major explanatory factor, as might have been expected given the occupational source of our socioeconomic measures. This points to the importance of psychosocial impacts of socioeconomic factors, as well as the more recognised material dimension, in head and neck cancer risk. The implications of these results could also extend to the inclusion of psychosocial/socioeconomic occupational factors in the future development of head and neck cancer risk assessment/prediction tools, and to informing prevention and early detection efforts.
---
What is already known on this subject
► The association between socioeconomic disadvantage (measured by low education and/or income) and head and neck cancer risk is well established.
► Less is known about the risks of head and neck cancer associated with socioeconomic aspects of occupations and the inter-relationship with occupational exposures.
---
What this study adds
► Low occupational socioeconomic prestige and position, and manual work, are associated with head and neck cancer, and such risks are only partly explained by smoking, alcohol and occupational exposures.
► Perceptual occupational psychosocial status (Standard International Occupational Prestige Scale) appears to be the strongest socioeconomic factor relative to socioeconomic position and manual/non-manual work.
► Implications could extend to the inclusion of psychosocioeconomic occupational factors in future development of head and neck cancer risk prediction tools, and to informing prevention and early detection strategies.
---
Competing interests None declared.
---
Patient consent for publication Obtained.
Ethics approval Ethical approval was obtained from appropriate institutional local review boards and all participants provided written informed consent for the original studies.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available upon reasonable request. Data are available from the corresponding author, DIC, upon reasonable request, with the permission of the INHANCE Consortium.
---
One of the most common barriers to using effective family planning methods is the belief that hormonal contraceptives and contraceptive devices have adverse effects on future fertility. Recent evidence from high-income settings suggests that some hormonal contraceptive methods are associated with delays in return of fecundity, yet it is unclear if these findings generalize to low- and middle-income populations, especially in regions where the injectable is widely used and pressure to bear children is significant. Using reproductive calendar data pooled across 47 Demographic and Health Surveys, we find that the unadjusted 12-month probability of pregnancy for women attempting pregnancy after discontinuing traditional methods, condoms, the pill, and the IUD ranged from 86% to 91%. The 12-month probability was lowest among those who discontinued injectables and implants, with approximately 1 out of 5 women not becoming pregnant within one year after discontinuation. Results from multivariable analysis showed that compared with users of either periodic abstinence or withdrawal, users of the pill, IUD, injectable, and implant had lower fecundability following discontinuation, with the largest reductions occurring among women who used injectables and implants. These findings indicate that women's concerns about potential short-term reductions in fecundity following contraceptive use are not unfounded.

---

Introduction
Across diverse contexts, one of the most common barriers to using effective family planning methods is the belief that hormonal contraceptives and contraceptive devices have adverse effects on future fertility (Boivin et al. 2020; Payne et al. 2016; Williamson et al. 2009). In many regions of the world, especially where pressure to bear children is significant (Dyer 2007; Hollos et al. 2009), these barriers are pervasive and expressed by both men and women (Bornstein et al. 2020; Sedlander et al. 2018). Historically, these concerns have been dismissed as "misperceptions," but emerging evidence indicates that such beliefs may in fact be rooted in personal experience or observations of others' slower-than-expected returns of fecundity following contraceptive discontinuation (Bell et al. 2023).
Although previous reviews have generally concluded that one-year pregnancy rates following cessation of contraception are similar across a range of contraceptive types (Girum and Wasie 2018; Mansour et al. 2011), recent studies from high-income countries have indicated that some contraceptives might impact fecundity, especially in the short term. A 2020 study by Yland et al. (2020) using prospective cohort data collected in Denmark and North America found transient delays in return of fecundity among women who stopped use of oral contraceptives, the contraceptive ring, and some long-acting reversible contraceptives compared with barrier methods, with the largest decreases in fecundability among injectable and patch users. Importantly, and in contrast to prior studies, the authors employed a time-to-pregnancy study design for estimating fecundability, or the probability of conception per menstrual cycle, which is recommended to assess biologic fertility in a population (Joffe et al. 2005).
A key question is the extent to which the results from the Yland et al. (2020) study, which was conducted among individuals planning a pregnancy in Denmark and North America, generalize to women in low- and middle-income countries (LMICs), given several key differences in the contraceptive and fertility landscapes between high-income countries and LMICs. First, contraceptive formulations, which refer to the types of active ingredients and doses found in hormonal methods, are not uniform across settings (Sitruk-Ware et al. 2013). These formulations are linked with different mechanisms of action and rates of metabolization in the body that may influence the return of fertility following discontinuation. Second, there may be differences in the sociodemographic characteristics (e.g., age, parity) or life course stages associated with method preferences and use across settings. These context-specific differences in user profiles may limit the external validity of studies conducted in high-income countries.
Third, studies from high-income countries have mostly focused on patterns of fertility following oral contraceptive (pill) use (Barnhart and Schreiber 2009; Farrow et al. 2002), and the limited studies incorporating users of the contraceptive injectable or implant have been based on few study participants (Yland et al. 2020). This latter limitation is especially concerning given the rapidly increasing numbers of women in LMICs who use injectables and implants (Adetunji 2011; Anglewicz et al. 2019). Fourth, there are geographic differences in the burden of infertility, with higher prevalence of both primary and secondary infertility in LMICs than in high-income countries (Mascarenhas et al. 2012). Reasons for these differences are not clear but may relate to differences in exposure to untreated reproductive tract infections (Larsen 2000), HIV infection (Gemmill et al. 2018), post-abortion complications, and injuries or infections caused or aggravated by childbirth.
To date, one study by Barden-O'Fallon and colleagues (2021) evaluated the return of fecundity among West and East African populations and found that the 12-month probability of pregnancy was lowest among those who had discontinued a hormonal method in order to become pregnant. The study, which used single-decrement life tables, was able to explore differences in these patterns by type of method discontinued, age, and parity but did not adjust for other known risk factors that might influence fecundability, such as socioeconomic status, partnership status, and health conditions and behaviors. This study also did not comprehensively describe potential short-term reductions in fecundity, which may be enough to dissuade women from using more effective methods (Barden-O'Fallon 2005).
The limited prior research on the topic of contraceptive use and return of fertility, as well as differing fertility contexts between the Global North and South, makes a compelling case for conducting a systematic evaluation in LMICs. While there are various ways to study fecundability in populations, the field of epidemiology has made great strides in investigating and identifying factors that impact individuals' or couples' ability to become pregnant using multivariable-adjusted time-to-pregnancy study designs (Joffe et al. 2005). This methodological approach, however, is rarely applied to populations from LMICs.
Using pooled, population-based data from 47 LMICs, the current study employs a retrospective time-to-pregnancy design to rigorously evaluate the return of fertility among women who discontinue contraception in order to become pregnant. Our multivariable approach accounts for differing distributions of risk factors for impaired fertility across populations that have not been fully considered by prior studies. This study, therefore, provides urgently needed quantitative evidence about method-specific impacts of use on return of fecundity in understudied settings. Ultimately, such information is of paramount importance to potentially validate and address, rather than dismiss and ignore, women's concerns about contraception and to enhance person-centered counseling and contraceptive autonomy (Senderowicz 2020).
---
Methods
---
Data and Measures
We considered all Demographic and Health Surveys (DHS) conducted after 2010 that included a reproductive calendar module in which women were asked to provide reasons for discontinuing a method. If a country had more than one survey in this period, we used the most recent survey. Forty-eight DHSs conducted between 2010 and 2018 met the inclusion criteria; one survey (Yemen 2013) was excluded because information on an important covariate, education, was not included in publicly available data. Online appendix Table A1 displays a list of all 47 surveys and corresponding sample sizes included in our analysis.
DHS calendar data are retrospective month-by-month histories covering the five-year period prior to the interview. The calendars record women's reproductive status in each month; possible states include pregnancy, birth, termination, and contraceptive use or nonuse. In any month when a woman reported discontinuing a contraceptive method, she was asked why she discontinued. We limited our study to women with a history of sexual activity who discontinued contraception because they "wanted to become pregnant" (N = 101,180 observations), which assumes that women in our study are exposed to the risk of pregnancy and are not taking deliberate action to avoid pregnancy.
Calendar data allowed us to determine the number of cycles (months) after contraceptive discontinuation it took women to become pregnant, or whether they were unsuccessful during the period of observation. For all observations, time-to-pregnancy intervals began when women discontinued a method to become pregnant. Women were followed until one of the following endpoints, whichever occurred first: (1) a pregnancy occurred (based on self-report); (2) a woman began using contraceptives again after a period of nonuse and no observed pregnancy (censored); or (3) until three months prior to the interview (censored). This last endpoint avoids underestimating early pregnancies at the time of the interview that are underreported either because women do not yet recognize they are pregnant or because women do not yet want to disclose their pregnancy status. Women who may have been in the early stages of pregnancy at the time of the interview are still included in the study, but they are included as censored observations (i.e., in the population at risk of pregnancy until three months prior to the interview). Including all months up to the survey interview does not change the results. We also accounted for the presence of longer time-to-pregnancy intervals by censoring all observations at 12 months among those presumably at risk for pregnancy for more than a year.
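As an illustration, the follow-up rules described above can be sketched as a small function. The calendar encoding and all names below are ours, chosen for exposition; they are not the DHS's actual variable coding.

```python
# Hedged sketch of the censoring rules: pregnancy is an event; resumed
# contraceptive use, the 3 months before the interview, and the 12-month
# cap are censoring points. States: "N" = not using, "P" = pregnant,
# "C" = contracepting (hypothetical codes, not DHS codes).

def time_to_pregnancy(calendar, interview_month, max_follow_up=12):
    """Return (months_at_risk, pregnant) for one woman's episode.

    calendar: list of monthly states after discontinuation, oldest first.
    interview_month: 1-based index of the interview relative to the start
        of follow-up; the last 3 months before it are dropped to avoid
        under-reporting of early pregnancies.
    """
    # Do not count the 3 months immediately before the interview,
    # and never follow anyone beyond max_follow_up months.
    horizon = min(len(calendar), interview_month - 3, max_follow_up)
    for t in range(horizon):
        state = calendar[t]
        if state == "P":          # pregnancy observed -> event
            return t + 1, True
        if state == "C":          # resumed contraception -> censored
            return t, False
    return horizon, False         # censored at the horizon
```

For example, a woman whose post-discontinuation calendar reads not using, not using, pregnant would contribute 3 months at risk and an event, while one who resumed contraception in month 2 would be censored after 1 month.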
We imposed several inclusion/exclusion criteria for our analytic sample (Figure 1). First, we restricted data to observations for which the month following contraceptive discontinuation was coded as either "not using" or "pregnancy" (n = 99,965 eligible observations). Second, we excluded observations for which contraceptive discontinuation occurred within the three months prior to the interview to account for potential under-recognition of pregnancies at the time of the survey (n = 3,132). Third, to reduce the threat of recall bias (Bradley et al. 2015), we limited our analysis to women who discontinued a contraceptive in the two years prior to the survey, which led to the exclusion of an additional 61,753 observations. In addition, if a woman contributed more than one eligible observation (n = 465 cases), we used the most recent one, so our unit of analysis is women, rather than episodes. We also excluded observations reporting less commonly used methods such as the female condom and those using the lactational amenorrhea method (n = 773). Lastly, we excluded those missing data on key covariates measured in all surveys (n = 15). The final sample size for our main analysis comprised 33,827 women attempting pregnancy, representing 25,641 pregnancies and 128,263 monthly cycles. Because the number of eligible women for analysis for some countries and methods was small (i.e., < 100 eligible cases), we pooled data across all surveys to ensure an adequate sample size for comparing time-to-pregnancy by prior contraceptive method used.
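The exclusion cascade above can be checked arithmetically; the counts are those reported in the text, and the variable name is ours.

```python
# Arithmetic check of the reported sample flow (counts from the text).
eligible = 99_965          # month after discontinuation coded "not using"/"pregnancy"
eligible -= 3_132          # discontinued within 3 months of the interview
eligible -= 61_753         # discontinued more than 2 years before the survey
eligible -= 465            # extra episodes from women with >1 eligible observation
eligible -= 773            # female condom and lactational amenorrhea method users
eligible -= 15             # missing key covariates
assert eligible == 33_827  # the final analytic sample of women
```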
Our main independent variable, contraceptive method discontinued, was categorized by method type. We included methods in the analysis if at least 500 women in the pooled sample reported using that method to ensure an adequate number of method-specific observations for analysis; methods meeting this criterion are the oral contraceptive pill, IUD, injectable, male condom, implant, periodic abstinence, and withdrawal. For analysis, we grouped periodic abstinence and withdrawal into a category of traditional methods. The surveys included in our study did not collect further information on what type of pill, IUD, implant, or injectable was used, so we were unable to further disaggregate these methods by more specific characteristics (e.g., hormonal vs. copper IUD, different contraceptive formulations).
We considered several confounding factors for analysis that are probable risk factors for impaired fecundity or have been empirically associated with fecundability in prior studies. To account for reduced fecundability associated with age, we included a categorical variable with the following classification, which was based on respondents' age at the time of discontinuation: 15-19, 20-29, 30-34, 35-39, and 40 or older. Information on coital frequency and partner characteristics was unavailable. Instead, we used a three-category measure of union status that incorporates whether women were in a polygynous union (in a non-polygynous union, in a polygynous union, and not in a union). Some research has suggested that infertility and fecundability are patterned by socioeconomic attributes such as education and income (e.g., Schrager et al. 2020). These patterns do not reflect inherent biological differences across socioeconomic position but instead are mediated by behavioral and lifestyle characteristics, as well as access to health care over the life course. We therefore included variables measuring socioeconomic position or access to health care that may help reduce the threat of residual confounding for risk factors correlated with impaired fecundity. The first, education, was coded as no education, primary, secondary, or higher. The second was a measure of household wealth that was coded according to the DHS wealth quintile classification for each country based on assets and household characteristics (i.e., poorest to richest; Rutstein and Johnson 2004). We also included a measure of urban versus rural residence based on urban and rural classifications for each country.
We included three sexual and reproductive health measures that may influence fecundability. Parity at the time of contraceptive discontinuation was assessed as a binary variable (nulliparous vs. parous). As noted earlier, exposure to untreated STIs may affect fecundity. We therefore included a measure of STI history that was assessed from questions asking if participants had an STI or symptoms of an STI (bad-smelling abnormal genital discharge or a genital sore or ulcer) in the 12 months prior to the survey; any indication of an STI or STI symptoms was coded as yes (vs. no indication). Our third measure assessed whether the respondent reported correct knowledge of the fertile period during an ovulatory cycle (yes or no), as this knowledge could be used to optimize the chance of pregnancy in each cycle (Capotosto 2021).
Our analyses also included two known risk factors for infertility, body mass index (BMI) and exposure to tobacco products (Rossi et al. 2014). We calculated BMI from weight and height data that were measured directly during the survey and categorized the measure according to the conventional WHO classification of adult underweight (< 18.5), normal (18.5-24.9), overweight (25.0-29.9), and obese (≥ 30.0). Our second measure was a composite binary indicator for use of tobacco products, determined from several questions assessing cigarette, cigar, and chewing tobacco use measured at the time of the survey (yes to using at least one product vs. none).
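For illustration, the WHO BMI classification used above can be expressed as a simple function; the function and category names are ours, not variables from the surveys.

```python
def bmi_category(weight_kg, height_m):
    """Classify adult BMI into the WHO classes cited in the text."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal"
    if bmi < 30.0:
        return "overweight"
    return "obese"         # BMI >= 30.0
```

For example, 70 kg at 1.75 m gives a BMI of about 22.9 and falls in the normal class.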
All covariates except for age and parity were measured at the time of the survey; age and parity status corresponded to when the woman discontinued contraception. All surveys in our analyses contained the following measures: age, parity, education, urban or rural residence, wealth, union status, and knowledge of the fertile period. Measures of BMI, recent history of an STI, and use of tobacco products were not available for all surveys. Therefore, in a sensitivity analysis, we tested whether our results were robust to a more extensive set of confounders in a subsample of countries that had all available covariates (n = 9,828).
---
Statistical Analysis
First, we used the Kaplan-Meier method to estimate survival curves and one-year probabilities of pregnancy separately for each eligible contraceptive method. We also calculated median time to pregnancy for each method as the number of months by which at least 50% of women had become pregnant.
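The Kaplan-Meier step can be sketched in a few lines. The following is a minimal, self-contained illustration on entirely synthetic episode data, not the authors' code (the actual analyses were run in Stata with survey weights); the function names and data are hypothetical.

```python
# Kaplan-Meier sketch of time-to-pregnancy (TTP) after contraceptive
# discontinuation. All data below are synthetic, for illustration only.

def kaplan_meier(times, events):
    """Return {month: S(month)} from (time, event) pairs.
    event=1 means pregnancy observed; event=0 means right-censored."""
    surv, s = {}, 1.0
    for t in sorted(set(times)):
        at_risk = sum(1 for ti in times if ti >= t)
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if at_risk:
            s *= 1 - d / at_risk
        surv[t] = s
    return surv

# Hypothetical episodes: months to pregnancy (or censoring)
times  = [1, 2, 2, 3, 4, 5, 6, 8, 12, 12]
events = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]   # 0 = censored at month 12

S = kaplan_meier(times, events)
p12 = 1 - S[12]   # 12-month probability of pregnancy
# median TTP: first month at which S(t) drops to 0.5 or below
median_ttp = min(t for t, s in S.items() if s <= 0.5 + 1e-9)
```

With these ten synthetic episodes, `p12` is 0.9 and `median_ttp` is 4 months, mirroring the kind of quantities reported in Table 2.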
Second, we used Cox proportional hazard models for discrete survival data to model time to pregnancy and estimate fecundability ratios (FRs). FRs compare the odds of becoming pregnant between the exposed and unexposed groups; an FR less than 1 indicates that the exposed group (e.g., women discontinuing hormonal methods) experienced decreased odds of pregnancy compared with the unexposed or reference group (e.g., women discontinuing traditional methods) within the first year after contraceptive discontinuation. These models account for changes in the average fecundability of the population at risk over time, which result from more fecund women being removed from the risk set in later months. All models accounted for right-censoring and included country fixed effects to control for unobservable characteristics within each country. Tests of proportionality, including visual inspection of log-log survival plots, showed that the proportionality assumption was generally upheld.
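To make the FR concrete: it is an odds ratio of conception per month at risk. A crude, unadjusted version can be computed directly from person-month tallies, as in the hypothetical sketch below (the paper's models additionally adjust for covariates and country fixed effects, which this toy calculation does not).

```python
# Crude fecundability ratio (FR): odds of pregnancy per month for an
# exposed group (e.g., former injectable users) vs. a reference group
# (e.g., traditional-method users). Counts are synthetic.

def monthly_odds(pregnancies, person_months):
    p = pregnancies / person_months      # per-month probability of pregnancy
    return p / (1 - p)                   # convert probability to odds

# Hypothetical tallies over the first year after discontinuation
exposed   = {"pregnancies": 120, "person_months": 1000}
reference = {"pregnancies": 200, "person_months": 1000}

fr = monthly_odds(**exposed) / monthly_odds(**reference)
# fr < 1 means the exposed group has lower monthly odds of conception
```

Here `fr` is about 0.55, i.e., the exposed group's monthly odds of conception are roughly half those of the reference group.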
We assumed that women using traditional methods or condoms served as appropriate counterfactuals for women using methods previously hypothesized to affect the return of fecundity following discontinuation, such as hormonal methods and IUDs. In the main analysis, women using traditional methods were selected as the reference category owing to concerns that condom users may differ from traditional method users with regard to their STI or HIV risk, which could impact time to pregnancy (Gemmill et al. 2018). That said, we also investigated whether inferences were the same when we used condom users as the reference group, as this would provide additional support for the idea that hormonal methods and IUDs influence future fecundity because of biological mechanisms of action.
We conducted several additional sensitivity analyses to evaluate the robustness of our findings. First, for users of injectables, we assumed an additional lag of three months to account for the possibility that women may have received their last injection in the month they reported discontinuing the method, and therefore could be fully protected from pregnancy up to three months. Second, as described earlier, we limited our sample to surveys that included the full set of covariates, including BMI and tobacco use, to examine whether our results were robust to their inclusion. Third, we conducted all analyses separately for women aged 40 or older, as any potential reductions in fecundity could be amplified for this age group. And finally, we expanded our sample to all eligible episodes within the entire five-year contraceptive calendar.
Following prior multicountry DHS studies (Bradley and Shiras 2022; Gemmill et al. 2018; Sarnak et al. 2023), we used custom weights accounting for complex sampling designs to allow each country to contribute equally to the pooled analysis; this approach ensures that results are not weighted more heavily toward surveys with larger sample sizes. Specifically, we multiplied the DHS-provided survey weights by a country-specific constant, such that the sample of women from each of the 47 countries in our analysis makes up 1/47th of the pooled sample, the derivation of which is outlined in detail elsewhere (Bradley and Shiras 2022). As an additional robustness check, we also conducted a jackknife analysis to ensure that results were not driven by countries with larger sample sizes. Statistics present unweighted ns and weighted percentages. Analyses were conducted in Stata 14.0 using the svy suite of commands. Ethics approval was obtained by the institutions that administered the surveys, and all analyses used anonymized databases.
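The reweighting step described above can be expressed as a small transformation: rescale each country's survey weights so that every country's total weight is an equal share of the pooled sample. The sketch below is a hypothetical illustration in Python (the study itself implemented this in Stata), with made-up country labels and weights.

```python
# Rescale survey weights so each of K countries contributes an equal
# 1/K share of the pooled sample. Rows and weights are synthetic.

def equalize_country_weights(rows):
    """rows: list of dicts with 'country' and 'weight' keys.
    Returns new weights where each country's weights sum to 1/K."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["weight"]
    k = len(totals)
    return [r["weight"] / (totals[r["country"]] * k) for r in rows]

rows = [
    {"country": "A", "weight": 2.0},
    {"country": "A", "weight": 2.0},
    {"country": "B", "weight": 0.5},
]
w = equalize_country_weights(rows)
# Country A's two rows now sum to 1/2, as does country B's single row,
# so neither country dominates the pooled estimate.
```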
---
Results
Characteristics of the study sample are presented in Table 1. The majority of women were in their 20s (59%), had at least one prior birth (87%), and had at least a primary education (82%). Most women were in a union, and 8% reported being in a polygynous union. A little less than one third of women (29%) reported correct knowledge of the fertile period. Almost half (49%) of women in the weighted sample were from the sub-Saharan Africa region, whereas less than 10% were from either Europe or South Asia.
Descriptive statistics for users of each contraceptive method type are also presented in Table 1. Women who discontinued injectables and pills made up 31% and 26% of the weighted sample, respectively. Sixteen percent of the weighted sample discontinued either periodic abstinence or withdrawal (traditional methods), 13% discontinued condoms, 8% discontinued IUDs, and 6% discontinued implants. There were also sociodemographic and regional differences by type of contraceptive discontinued, which provide strong motivation for multivariable analysis.

Figure 2 presents Kaplan-Meier survival curves of time to pregnancy by type of method discontinued. For ease of comparison, both panels include the same reference curve for traditional methods (combined periodic abstinence and withdrawal) represented by the black line. The top panel presents additional curves for the IUD and the pill, and the bottom panel presents additional curves for the implant and the injectable. Condoms are not included because they overlap closely with traditional methods.
Both figures demonstrate that users of the IUD, pill, implant, and injectable experience longer times to pregnancy than users of traditional methods. These curves are quantified in Table 2, which displays the median time to pregnancy (TTP) and 12-month probabilities of pregnancy observed for each method. The median TTP for traditional method and condom users following discontinuation is two months, while median TTP for pill and IUD users is three months. Those using the implant and the injectable experience a median TTP of four and five months, respectively. The median TTP for users of the injectable shortens to two months after accounting for a three-month lag. As evidenced by the Figure 2 curves and Table 2 data, there are also differences in 12-month probabilities of pregnancy. Traditional users had the highest probability at 91% (95% CI: 89.8, 91.4), followed by women using the condom (88%; 95% CI: 86.6, 88.6), the pill (87%; 95% CI: 86.0, 87.8), and the IUD (86%; 95% CI: 84.3, 87.7). Women discontinuing injectables and implants had the lowest 12-month probabilities of pregnancy, each at 80% (95% CI: 78.9, 81.0 and 77.3, 82.6, respectively). Thus, among women discontinuing injectables or implants in order to become pregnant, approximately 1 in 5 did not achieve pregnancy in a year, on average, compared with approximately 1 in 10 women using traditional methods.
Women aged 40 or older had longer median TTPs by contraceptive type discontinued, as well as reductions in the 12-month probability of pregnancy for all methods. Among older women who discontinued traditional methods, the 12-month probability of pregnancy was 81% (95% CI: 75.1, 86.1), which is approximately 10 percentage points lower than the probability among all women of reproductive age who also discontinue traditional methods; this difference likely captures well-known age-related declines in fecundity. Notably, 12-month probabilities of pregnancy were much lower for older women who discontinued either hormonal methods or the IUD compared with all women of reproductive age. For example, about 64% (95% CI: 56.9, 69.1) of women aged 40 or older became pregnant within a year following discontinuation of injectables, on average, compared with 80% (95% CI: 78.9, 81.0) among all women of reproductive age.
Table 3 presents results from a multivariable model that accounts for potential differences in underlying fecundity between women. The baseline model adjusts for age, parity, education, urban or rural residence, union status, and knowledge of the fertile period; the model also includes country fixed effects. The first column in Table 3 employs users of traditional methods as the reference category. Compared with these individuals, users of the pill, IUD, injectable, and implant had lower fecundability ratios following contraceptive discontinuation. The largest reductions in odds occurred among women who used injectables or implants: 0.41 (95% CI: 0.38, 0.45) and 0.51 (95% CI: 0.45, 0.58), respectively. Patterns are largely similar when employing condom users as the reference group (column 3), although FRs increase slightly. There were no significant differences in fecundability between condom users and traditional users. Findings remain similar after conducting several sensitivity analyses, with some exceptions. First, after accounting for a three-month lag for injectable users, we found that the adjusted FR (compared with traditional users) increases from 0.42 (95% CI: 0.38, 0.58) to 0.66 (95% CI: 0.60, 0.72) (not shown). Second, we reran our analyses among a subset of surveys that collected information on the full set of covariates (column 5 of Table 3) and found that results do not change substantially. Third, as shown in column 7, and mirroring our age-specific results in Table 2, we find large reductions in fecundability ratios for women aged 40 or over by contraceptive type after adjustment for covariates. Fourth, when we expand our analysis to all eligible episodes that occur within five years of the survey, we find similar results for all methods except for condom users.
Specifically, condom users have a lower fecundability ratio than traditional users that was not observed in our main analysis (FR for condom users is 0.81; 95% CI: 0.76, 0.87). Finally, results do not change after conducting a jackknife analysis.
---
Discussion
In this analysis using pooled data from 47 LMICs, we found that some contraceptive methods, when used prior to attempting to get pregnant, are associated with transient delays in return of fecundity, with the longest delays occurring among women who discontinued injectables and implants. These relationships persisted after adjustment for important confounders, suggesting that women's concerns about potential short-term reductions in fecundity following use of certain contraceptives are not unfounded.
We acknowledge that our results can be interpreted differently by fertility researchers. While our findings show that at least half of women will become pregnant within 2-3 months following discontinuation of traditional methods, condoms, the pill, and the IUD, we see different patterns for injectables and implants, two methods that are widely promoted and used across LMICs. More importantly, because fecundity is heterogeneous (Leridon 2007), the median estimates of time to pregnancy presented in Table 2 do not sufficiently capture how the entire distribution of time to pregnancy shifts to the right following discontinuation of hormonal methods. This distributional shift leads to lower 12-month probabilities of pregnancy for users of hormonal methods than for those who discontinue traditional methods. These impacts are rarely discussed by family planning researchers but may lead to noticeable differences within communities and social networks (Sedlander et al. 2018). As an example, in a hypothetical population of 10,000 women who discontinue injectables or implants, nearly 2,000 may still not experience pregnancy one year later, roughly twice the number we would expect in a population of 10,000 women who discontinue traditional methods.
Our study corroborates some, but not all, findings from Yland et al. (2020), who evaluated the association between pregravid contraceptive use and subsequent fecundability in Denmark and the United States. Similar to Yland et al., we find that users of injectables have the longest delays in return of fertility; both studies found average or median times-to-pregnancy of about five months (although the range from Yland et al. extended to eight months). However, our study findings diverge from those of Yland et al. (2020) regarding other contraceptive types. For example, whereas Yland et al. (2020) found that users of IUDs had increased time to pregnancy compared with users of barrier methods, we do not find this association. Our study also differs in that we find substantial reductions in fecundability ratios among implant users, whereas this relationship was not apparent in the Yland et al. (2020) study. These differences could arise from use of different formulations of hormonal contraceptives across contexts as well as the larger number of implant observations in the current study: n = 1,373 in this study versus n = 186 in Yland et al. (2020).
Our study also builds on the findings of Barden-O'Fallon et al. (2021), which found lower returns to pregnancy by 12 months among women in West and East Africa who discontinued hormonal methods. Taken together, these results indicate that previous reviews on the topic (Girum and Wasie 2018; Mansour et al. 2011), which suggested no impact, should be urgently updated to incorporate new evidence. Moreover, future research should evaluate the potential biochemical or biobehavioral pathways underpinning these relationships, which so far remain speculative.
Critically, these findings have implications for family planning programs in LMICs. Several global efforts, including FP2030 and the Sustainable Development Goals, emphasize increasing the use of modern contraceptives. However, these efforts are potentially at odds with women's contraceptive preferences and concerns. Our findings bolster the critical need for increased person-centeredness in family planning counseling and provision, in line with wider calls to shift the needle on family planning "successes" away from just "use" to maximizing autonomy and use of preferred methods (Senderowicz 2020). More concretely, our findings indicate that the acceptability of delayed return of fertility should be evaluated when recommending and choosing contraceptive methods.
Our study has several strengths. First, we use population-based data that allowed us to account for potential differences in population composition and underlying fecundity across settings. Second, our sample had a large number of observations of women who discontinued injectables and implants. By contrast, injectable and implant users from the Yland et al. study (2020) represented only 0.5% (n = 94) and 1.0% (n = 186) of participants, respectively. Third, there are few studies from LMICs that investigate determinants of fecundability and infertility. Our use of calendar data adds to the limited literature by employing a time-to-pregnancy study design most often used in higher resource settings.
There are also several limitations to note. First, except for age and parity, all covariates were measured at the time of the survey, not at the time of contraceptive discontinuation. It is unclear if this type of misclassification might bias our main results, since our measure of prior contraceptive type used does not suffer from this same error. Second, we relied on retrospective calendar data, which are subject to recall bias and other types of reporting errors (Bradley et al. 2015). In their report assessing quality of the DHS contraceptive calendar, for example, Bradley and colleagues' (2015) results suggest worse reporting for events further in the past. To address this concern, we limited our observations to the two years prior to the survey, although sensitivity analyses using all five years prior to the survey generally yield similar results. Third, for users of injectables, we did not have data on when women received an injection relative to when they reported discontinuation, although we did include a three-month lag in our sensitivity analyses. Fourth, owing to data limitations, we could not distinguish between method type for injectables and IUDs. We note, however, that in low-resource settings, many IUDs are copper, rather than hormonal, and some injectables are formulated to provide contraception for one or two rather than three months (e.g., combined injectables and NET-EN/EV). While DMPA, which provides three months of protection, remains the most common type of injectable in LMICs, there is some variation in the injectable mix across settings (Laryea et al. 2016), although this is not well documented.
Fifth, because interviewers could record only one contraceptive method per month, discontinuation of multiple contraceptive methods (i.e., dual use) is not possible to measure. Reports of using traditional methods like abstinence and withdrawal may also suffer from poor reliability compared with use of hormonal methods (Callahan and Becker 2012). Sixth, the DHS data we used do not include information on regular sexual activity, partners, and other measures that could influence fecundability. Measures of people's underlying fecundity or propensity for infertility were also not possible to estimate. Seventh, we did not include measures of sexual violence and intimate partner violence in our study, even though prior research suggests that these experiences may influence health outcomes, including STI transmission (Barber et al. 2018; Campbell 2002; Coker, Sanderson et al. 2000; Coker, Smith et al. 2000).
A final limitation is that we cannot validate two key assumptions of this study: that women's desire to become pregnant following contraceptive discontinuation was stable over time and that women were actively trying to become pregnant over the exposure period. As noted in prior research, short-term changes in pregnancy intention have been well documented in several contexts (Sennott and Yeatman 2012; Trinitapoli and Yeatman 2018), and pregnancy ambivalence is also common (Sennott and Yeatman 2018; Tobey et al. 2020).
---
Conclusion
Many women in LMICs either do not use contraception or discontinue contraceptive methods for fear that contraception will inhibit their future fertility. Although return of fecundity is acknowledged in the WHO Medical Eligibility Criteria for Contraceptive Use (MEC) (World Health Organization et al. 2018), contraceptive counseling protocols and tools used in LMICs may not include nuanced information about return to fecundity following discontinuation, even though this remains a common concern among women. Furthermore, although the WHO MEC discusses potential effects of injectables on return to fertility, there is no mention of other reversible methods. Our novel findings on the contraceptive implant, in particular, warrant increased attention within the family planning community. While we recognize that the present analysis has limitations, we hope our study prompts further research on this historically overlooked topic.
Ultimately, our results indicate that delayed return to fecundity after discontinuing some hormonal methods is a common experience in LMICs, providing what we believe to be some of the first multicountry evidence to validate women's lived experiences from these regions. Contraceptive counseling policy and programs, therefore, should consider integrating this information to provide a fuller picture of the range of time-to-pregnancy experiences following contraceptive discontinuation, especially for injectables and implants. While information about potential declines in fecundity is just one criterion that may influence women's contraceptive use, individuals have a right to this knowledge so that they can make informed choices (Senderowicz 2020).
Millions of Americans are diagnosed with depression each year, costing billions of dollars. Consequences of depression are detrimental to the sufferer and can affect children and significant others, exemplifying the public health significance of this illness. Little is known about depression among mothers who identify as lesbian, even though they may be at an increased risk. The first aim of the Relationships And Depression In Childbearing LEsbian (RADICLE) Moms study was to determine the prevalence rate of depression in a sample of self-identified lesbian women with at least one child under 18 years of age. The second aim was to investigate minority stress to determine if higher levels of social support reduce the effects of gay stress on depression symptoms. Recruitment efforts targeted counties in two states that had marriage equality and two that did not. A comprehensive survey including standardized depression and stress scales was utilized for assessments. One hundred thirty-one self-identified lesbian mothers responded via an anonymous Internet survey. Results indicate that 8.4% of the sample reported clinically significant levels of depressive symptoms; however, limitations of the sample such as privileged demographics suggest that women in the lesbian mother population at large may experience significantly higher rates of symptoms. After controlling for demographic factors, separate multiple regression analyses were conducted to examine the relationship between depression and social support, gay stress, and general stress. Results show that each
Ron Stall for their time and support throughout this project. I am extremely grateful for their mentorship in and outside of the classroom and for the opportunity to learn from their knowledge and experience. Their suggestions and guidance helped steer this study down the path of success.
I would like to thank Drs. Nina Markovic, Beth Nolan, and Martha Terry for their generous funding for the project, without which, the study could not have been to this scale. I would also like to thank the mothers of the study who selflessly took the time to complete an uncompensated survey to help an unknown student, while most likely juggling work and children.
Hundreds of businesses, organizations, and individuals posted flyers, sent emails, and shared the study link, for which I am incredibly grateful. Without their help, the study would not have succeeded. I would like to extend a special thanks to the editor of the New England newspaper Bay Windows for donating advertisement space, which recruited a large portion of the participants. I would also like to thank Tanya Disney, Pallavi Jonnalagadda, Jason Chiu and the students and faculty of the University of Pittsburgh Statistics Consulting Lab for their time and help with the data analysis, as well as for their patience with my endless questions and short deadlines. Furthermore, I'd like to thank Catherine Boothby for her help designing the study logo.
Lastly, but most importantly, I would like to thank my family and friends for their unconditional support, love and encouragement. I am especially thankful for my wonderful partner, L. Fusco, who stood by me through setbacks, aggravation, and countless hours of missed activities due to my dedication to this project. I love you all!
---
INTRODUCTION
Over 17 million Americans suffer from depression each year (Agency for Healthcare Research and Quality, 2000). Depression, however, does not affect all Americans equally; women experience higher rates than men do (Centers for Disease Control and Prevention [CDC], 2011) and research indicates that lesbian women may suffer more than heterosexual women do (Bradford & Ryan, 1988; Gilman et al., 2001; Harrison, 1996; Sorensen & Roberts, 1997; White & Levinson, 1995). Nevertheless, depression can be variable over the life course, so it is important to determine when individuals are most at risk and which factors are most influential.
If specific time points or risk factors can be determined, prevention efforts can be directed upon those and depression levels may be reduced or eliminated.
The postpartum period, defined as the 12 months following childbirth, has been extensively studied for incidence of minor and major depressive episodes (Gavin et al., 2005).
Postpartum depression (PPD) is common and affects nearly 20% of mothers to some degree (Gavin et al., 2005). Lesbian mothers appear to be especially susceptible to PPD (Ross, 2005).
There are numerous reasons why lesbian women likely experience higher rates of PPD than heterosexual women. First, a personal history of depression is the most significant risk factor for PPD, and depression disproportionately affects lesbian women (Cochran, 2001; Frayne, Nguyen, & Allen, 2009; O'Hara & Swain, 1996; Robertson, Grace, Wallington, & Stewart, 2004; Ross, 2005; Trettin, Moses-Kolko, & Wisner, 2006; van Bussel, Spitz, & Demyttenaere, 2006). Second, low social support is a risk factor for PPD (Frayne et al., 2009; O'Hara & Swain, 1996; Robertson et al., 2004; Ross, 2005; Trettin et al., 2006); studies have found that lesbian women report weaker family ties, whether as a result of geography or discrimination (Gartrell et al., 1996; Gartrell et al., 2000; Kurdek, 2001; Ross, 2005; Rothblum & Factor, 2001). Furthermore, lesbian women report less support for childbirth from gay and lesbian friends (DeMino, Appleby, & Fisk, 2007; Gartrell et al., 1999; Ross, 2005). These findings are important because support from family and friends can protect against depression (Khatib, Bhui, & Stansfeld, 2013; Leahy-Warren, McCarthy, & Corcoran, 2012). Third, high levels of stress and low social status are risk factors for PPD (McFarlane et al., 2005; Meyer & Paul, 2011; Ross, 2005; Trettin et al., 2006). Research indicates that lesbian women experience institutional and medical discrimination (Burgess, Tran, Lee, & van Ryn, 2007; Hatzenbuehler, McLaughlin, Keyes, & Hasin, 2010). This discrimination may reduce the social status of lesbian women, create high levels of stress, and discourage victims from seeking treatment (Cochran, 2001; Crawford, McLeod, Zamboni, & Jordan, 1999; Friedman, 1999; Gartrell et al., 1999; O'Hanlan, Dibble, Hagan, & Davids, 2004; Ross, 2005; Stacey & Biblarz, 2001; Trettin et al., 2006).
The evidence suggesting that lesbian women experience higher rates of PPD than heterosexual women is especially salient since one-third of lesbian women give birth (Gates, Badgett, Macomber, & Chambers, 2007).
Maternal depression can adversely affect the health of children and significant others (Ahlström, Skärsäter, & Danielson, 2009; Bulloch, Williams, Lavorato, & Patten, 2009; Campbell, Morgan-Lopez, Cox, & McLoyd, 2009; Fanti & Henrich, 2010; Ishaque, 2009; Santos, Matijasevich, Barros, & Barros, 2010). Children and significant others demonstrate more internalizing behaviors such as depression and anxiety (Ahlström et al., 2009; Bulloch et al., 2009; Fanti & Henrich, 2010). Children also experience more externalizing behaviors such as conduct problems and hyperactivity than do children of non-depressed mothers (Fanti & Henrich, 2010).
Research has found that almost one-half of lesbian women want children; nearly 80,000 foster and adopted children live with lesbian, gay, and bisexual (LGB) parents; and two million LGB individuals would like to adopt (Gates et al., 2007). Despite these findings, researchers have not determined the risk or prevalence of PPD in lesbian women (Ross, 2005), nor have they identified theories or treatments for maternal depression in lesbian women.
The first aim of this project is to determine the prevalence rate of depression in a sample of self-identified lesbian women with at least one child less than 18 years of age. The second aim is to investigate minority stress and determine if higher levels of social support reduce the effects of gay stress on depression symptoms. Finally, recommendations for future research will be proposed.
---
BACKGROUND
Several studies have evaluated maternal mental health in lesbian women (Fulcher, Sutfin, Chan, Scheib, & Patterson, 2002; Gartrell et al., 2000; Golombok et al., 2003; Patterson, 2001; Ross, Shapiro, Peterson, & Stewart, 2009; Steele, Goldfinger, & Strike, 2007). Some studies indicate that lesbian mothers are not at a higher risk for depression compared with heterosexual women (Fulcher et al., 2002; Gartrell et al., 2000; Golombok et al., 2003; Patterson, 2001). Caution must be taken, however, when interpreting these results. First, lesbian women experience unique risk and protective factors that may influence depression expression differently than for heterosexual women. Second, although these studies were groundbreaking and extremely important, they contain numerous limitations making a definitive judgment about depression impossible.
The Contemporary Families Study examined maternal mental health in both homosexual and heterosexual women (Fulcher et al., 2002). These researchers, however, utilized a convenience sample, which consisted of predominantly Caucasian, well-educated and wealthy participants, which are characteristics that can protect against depression (Ertel et al., 2011). The researchers also indicated that the children of lesbian mothers reported significantly more contact with non-familial adults than the children of heterosexual mothers. This finding suggests that these lesbian women had larger support systems, which potentially buffered them against depression.
The San Francisco Bay Area Families Study found no significant maternal mental health problems in a sample of 37 families (Patterson, 2001). However, they did not include a heterosexual comparison group. Without a comparison group, it is difficult to make accurate judgments about outcome correlations between heterosexual and homosexual mothers.
Furthermore, over 90% of the sample was Caucasian and most possessed the protective characteristics of higher education and affluence (Ertel et al., 2011). Finally, the small, geographically homogeneous sample was a significant limitation, preventing the detection of a true difference and generalizability of findings.
The National Lesbian Family Study was a longitudinal study that followed lesbian families with a child conceived by donor insemination (Gartrell et al., 2000). This inclusion requirement alone may have biased study results since it is not known how donor insemination could affect study outcomes. In addition, the study only measured mental health on one criterion: whether or not women had sought counseling. Although this may reveal some information about mental health, it does not detail depressive symptomology or its relationship to lesbian motherhood. Including additional measures of depression would have likely provided a better indication of maternal mental health.
The Avon Longitudinal Study of Parents and Children was an extensive study that included a representative sample of lesbian mothers (Golombok et al., 2003). Results indicated that lesbian mothers were more likely to seek psychological treatment than heterosexual mothers were, but no more likely to endorse symptoms of depression. Although it is not known why these mothers sought treatment, seeking help is generally an indication of impairment. Consequently, further investigation into this finding is necessary to determine if it is predictive of depression.
Caution must also be taken because this study was conducted in the United Kingdom. Lesbian, gay, bisexual and transgender (LGBT) individuals in the United Kingdom enjoy more rights and benefits than those in the United States. As a result, although there may be similarities, this is not an equal comparison, and researchers may discover different outcomes if the study were conducted in the US. Ross et al. (2007) examined perinatal depression among lesbian and bisexual women.
They included a heterosexual comparison group, but the sample was burdened by privileged demographics, geographic homogeneity, and a small sample size, similar to the studies reviewed above. Nevertheless, they found higher mean depression scores among the lesbian and bisexual mothers and concluded that, although additional research is necessary, lesbian and bisexual mothers may have higher rates of depression than heterosexual mothers do (Fitzgerald, 1999; Golombok et al., 2003; Tasker & Golombok, 1995). Thus, the need for research has changed. We now need to explore if, when, and why depression affects lesbian mothers. Researchers need to examine whether factors such as social support, general stress, and minority stress influence rates of depression among lesbian mothers.
---
METHOD
The Relationships And Depression In Childbearing LEsbian (RADICLE) Moms Study was developed to explore depression and minority stress. The first aim of the study was to determine the prevalence rate of depression in a sample of self-identified lesbian women with at least one child less than 18 years of age. We hypothesized that lesbian mothers would experience higher rates of depressive symptoms than reported among presumed heterosexual mothers. In addition, we hypothesized that lesbian mothers with at least one child aged birth to 12 months would experience higher rates of depressive symptoms than lesbian mothers without a child in that age range. The second aim of the study was to investigate minority stress and determine if higher levels of social support reduce the effects of gay stress on depression symptoms. We hypothesized that social support would protect lesbian mothers from the negative effects of gay stress, thereby reducing depressive symptoms. This analysis was guided by the minority stress model (Meyer, 2003).
The minority stress model predicts that minority stress can negatively affect psychological health but social support can mediate those effects (Meyer, 2003). Therefore, following this model will allow us to examine if there is a relationship between minority stress, social support, and maternal depression. Minority stress has been defined as ". . . culturally sanctioned, categorically ascribed inferior status, social prejudice and discrimination, the impact of these environmental forces on psychological well-being, and consequent readjustment or adaptation" (Brooks, 1981, p. 107). In this study, we classified gay stress as the source of minority stress, as research indicates they are independent stressors (Lewis et al., 2003).
---
PARTICIPANTS
In order to participate in this study, women had to be at least 18 years of age, have children living in the household, and identify as lesbian or gay, or have sex with women only. Bisexual women were not specifically targeted because research indicates that individuals who are bisexual often suffer from higher rates of depression, anxiety, and general stress than individuals who are homosexual or heterosexual; furthermore, they report less social support and poor integration into the gay community (Davis & Wright, 2001; Dobinson, MacDonnell, Hampson, Clipsham, & Chow, 2005; Jorm, Korten, Rodgers, Jacomb, & Christensen, 2002). One hundred sixty-nine individuals responded; of them, 131 met the criteria for study participation (see APPENDIX A for a flow chart of sample reduction to identify lesbian mothers).
---
RECRUITMENT
---
Mailed Recruitment
Recruitment packets included detailed information about the study on University of Pittsburgh letterhead (see APPENDIX B for letter body) and a color flyer (see APPENDIX C). The flyer had tabs at the bottom for participants to detach; the tabs included the study name and the web address. Recruitment packets were mailed to 406 businesses, organizations, and professionals in the towns, cities, and boroughs of Hampshire County, Massachusetts; Windham County, Vermont; DeKalb County, Georgia; and Tompkins County, New York. These counties were chosen because, according to Census data, they are among the top US counties with a high percentage of lesbian residents (Gates & Ost, 2004). Furthermore, Massachusetts and Vermont had marriage equality, whereas Georgia and New York did not. During the study period, however, New York passed marriage equality. Nevertheless, zip code data indicated that participants from all over the country, and one from outside of the country, took part in the study, so there was little utility in comparing data from Massachusetts and Vermont to data from Georgia and New York (see APPENDIX D for a map of participant zip codes). Limited resources restricted the study's range; therefore, in order to obtain a sufficient sample size, the mailed recruitment was concentrated in these areas.
The mailing was conducted during the month of May 2011. Places that specifically cater to or serve the LGBT community were initially identified for the mailing. After those were exhausted, libraries, community centers, childcare centers, free-care clinics, midwifery and parenting services, obstetric/gynecological offices, newspapers, WIC offices, colleges and universities, health departments, LGBT-friendly religious organizations, coffee shops and restaurants were targeted. Twenty-nine packets were returned by the post office, labeled as undeliverable.
---
Emailed and Online Recruitment
In order to save on postal expenses, recruitment packets were e-mailed to 321 additional professionals, businesses, and organizations that serve the counties. Ninety-nine follow-up responses were received; nearly 100% were positive and indicated that the recipient would hang the flyer or include it in their next e-newsletter or e-blast.
A RADICLE Moms study Facebook page was created and advertisement space was purchased on Facebook (http://www.facebook.com/). Filters were set so the advertisement would target parents between the ages of 18 and 60 years. The advertisement included the name of the study, a color picture, and a 20-word description (see APPENDIX E). The estimated reach was 26.3 million people; it was displayed 532,005 times. The advertisement was clicked on 109 times; however, it was not possible to determine if those clicks resulted in a completed survey. On average, each click cost one dollar, which was cost-prohibitive for this project.
As such, it was only displayed from June to September 2011.
In addition to many newsletters, Internet forums, online classified advertisements, and newspapers, Bay Windows, the largest provider of LGBT news in New England, placed a study advertisement banner on their website free of cost (Bay Windows, 2012). Individuals who clicked the banner were taken to the consent page of the study. The banner was very effective for recruitment; 17% of participants reported learning about the study through the banner advertisement. When prospective participants logged onto the survey site, they were given information about the study; specifically, they were informed of the risks, benefits, confidentiality of data, and contact information of the researchers. They were also told that they would not be compensated for their participation. If they chose to proceed, they were presented with three screening questions. The screening questions asked if they were at least 18 years of age, had at least one child living in the household, and identified as lesbian or gay, or as having sex with women only. Individuals who satisfied the screening requirements were invited to complete the survey; those who did not satisfy the screening requirements were not permitted to view or complete the survey. A SurveyMonkey control feature prevented individuals from accessing the site after they were disqualified or had submitted the survey.
---
PROCEDURE
---
MEASURES
In addition to questions about basic demographic information, participants were given a series of standardized instruments. The questionnaires were selected based on their published validity and reliability.
---
Demographic Information
Participants were asked for their age, race/ethnicity, highest level of education, employment status, income, and physical health status. They were also asked if they were currently in a relationship and, if so, the longevity of that relationship. Participants were additionally asked for the number, age, and sex of each child; if the child was conceived while in a heterosexual or homosexual relationship; and if the child was biological, adopted, or her partner's child. In order to evaluate a history of depression, participants were asked if they had ever been diagnosed with or treated for depression by a health care professional, and if one of their first-degree relatives had ever been diagnosed with or treated for depression. Finally, for tracking purposes, participants were asked for their zip code and how they learned about the study (see APPENDIX D for a map of participant zip codes; maps were created using the website http://batchgeo.com/).
---
Social Support
Social support plays a significant role in health for pregnant women (Blanchard, Hodgson, Gunn, Jesse, & White, 2009;Leahy-Warren et al., 2012;Seguin, Potvin, St-Denis, Loiselle, 1995).
Social support is especially important for sexual minorities (Beals & Peplau, 2005; McLaren, 2009; Wayment & Peplau, 1995). As a result, it was critical to assess the level of social support in the study sample. The Multidimensional Scale of Perceived Social Support (MSPSS) is a 12-item questionnaire that utilizes a 7-point Likert scale with subscales for family, friends, and significant other (Zimet, Dahlem, Zimet, & Farley, 1988). Zimet et al. (1988) label a romantic partner as a significant other, rather than a spouse, indicating that this is a suitable instrument for sexual minorities. Higher scores on the MSPSS indicate greater social support. Psychometric properties were demonstrated by Zimet et al. (1988): coefficient alpha reliability was 0.87 for the family subscale, 0.85 for the friends subscale, and 0.91 for the significant other subscale. Internal reliability and validity were also confirmed by research that included a sample of pregnant women (Zimet, Powell, Farley, Werkman, & Berkoff, 1990).
---
Depression
The shortened form of the Center for Epidemiologic Studies Depression Scale (CES-D) was chosen as a measure of depression symptoms (Cole, Rabin, Smith, & Kaufman, 2004; Radloff, 1977). The short form includes ten questions, instead of the 20 in the original version. The short form significantly reduced the number of questions that the participants had to answer, a critical factor to consider when participants are not compensated. Scores range from zero to 30; a score of ten or more is generally recognized as the cut point for a clinically significant number of depressive symptoms (Andresen, Malmgren, & Carter, 1994; Smarr, 2003). Cronbach's alpha for this form is 0.75 (Cole et al., 2004). The CES-D has been successfully used with other pregnant (Breedlove & Fryzelka, 2011), childbearing (Azur, 2007), and LGB women (Balsam, Lehavot, Beadnell, & Circo, 2010) and was therefore the preferred screening instrument.
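The scoring rule described above (ten items, total of 0 to 30, cut point of ten or more) can be sketched as follows. This is an illustrative sketch, not the study's actual code; it assumes each item is rated 0-3 and that the two positively worded items of the short form are reverse-scored, with their positions taken here as an assumption to be verified against the exact form used.

```python
# Hypothetical zero-based positions of the reverse-scored (positively
# worded) items on the 10-item short form; verify against the form used.
POSITIVE_ITEMS = {4, 7}

def score_cesd10(responses):
    """Return the total score (0-30) and whether it meets the >=10 cut point."""
    if len(responses) != 10:
        raise ValueError("the CES-D short form expects exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses):
        if not 0 <= r <= 3:
            raise ValueError("item responses must be rated 0-3")
        # Positively worded items are reverse-scored before summing.
        total += (3 - r) if i in POSITIVE_ITEMS else r
    return total, total >= 10

# Example: low symptom endorsement falls below the clinical cut point.
score, clinical = score_cesd10([1, 0, 1, 0, 3, 1, 0, 3, 1, 0])
```

The cut point of ten, rather than a diagnostic interview, is what classifies a participant as endorsing a clinically significant number of depressive symptoms in analyses like those reported below.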
---
Gay Stress
The Measure of Gay Stress (MOGS) was developed to examine minority stress experienced by gay men and lesbian women (Lewis, Derlega, Berndt, Morris, & Rose, 2001). The MOGS is a 60-item questionnaire that examines stress in the following areas: family reactions, family and partner, general discrimination, HIV/AIDS, misunderstanding, sexual orientation conflict, violence, friends and family visibility, public visibility, and work discrimination. The questions about HIV/AIDS were eliminated due to time restrictions and their limited utility for this project. Higher scores indicate higher levels of gay stress. Reliability coefficients are strong, ranging from 0.72 to 0.90 (Lewis et al., 2001).
---
General Stress
In order to differentiate between minority stress and general stress, the Perceived Stress Scale Short Form (PSS-4) was administered (Cohen, Kamarck, & Mermelstein, 1983). The PSS-4 contains four items, which are rated on a five-point Likert scale. The PSS-4 is significantly reduced from the original 14 items, thus saving time for the participants. Higher scores indicate a higher level of perceived stress. The reliability coefficient for this instrument is 0.72 (Cohen et al., 1983).
---
RESULTS
A frequency analysis, descriptive analysis, correlation analysis, and multiple linear regression analysis were conducted. Frequency statistics for participants' demographic characteristics are listed in Table 1. Descriptive statistics, including mean, standard deviation, and range are shown in Table 2. Complete data for the CES-D was obtained from 95 participants. One participant with missing data, however, reported a significant number of depressive symptoms and was included in the analysis. Only participants who answered at least three questions on the PSS-4, at least 47 questions on the MOGS, and at least 11 questions on the MSPSS were included in the analyses; mean substitution was used for missing data (Raaijmakers, 1999). The prevalence rate of clinical depression for this study was 8.4%. Fifteen participants reported having a child aged birth to 12 months. Results indicate that none of these participants were suffering from clinically significant levels of postpartum depression. Correlations between the variables were examined and are listed in Table 3. Social support was negatively correlated to gay stress r(82) = -.40, p < .001, depressive symptoms, r(82) = -.41, p < .001, and general stress r(82) = -.30, p = .003. Depressive symptoms were positively correlated to general stress r(82) = .65, p < .001, and gay stress, r(82) = .27, p = .007.
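The missing-data rule just described (a minimum number of answered items per scale, with mean substitution for the remainder) can be sketched as follows. This is an illustrative sketch under stated assumptions, not the study's actual code; the substituted value is taken to be the mean of the items the participant did answer, and `None` marks a missing response.

```python
# Minimum answered items per scale, mirroring the thresholds in the text.
MIN_ANSWERED = {"PSS-4": 3, "MOGS": 47, "MSPSS": 11}

def impute_scale(scale, responses):
    """Apply mean substitution within one participant's responses to one scale.

    Returns the completed response list, or None if the participant did not
    answer enough items and is excluded from analyses on this scale.
    """
    answered = [r for r in responses if r is not None]
    if len(answered) < MIN_ANSWERED[scale]:
        return None
    mean = sum(answered) / len(answered)
    return [mean if r is None else r for r in responses]

# Example: a PSS-4 record with one missing item is retained and imputed.
filled = impute_scale("PSS-4", [2, None, 3, 1])
```

Because the substituted value comes from the participant's own answered items rather than the sample mean, this form of mean substitution preserves each participant's overall response level on the scale.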
Gay stress was also positively correlated to general stress, r(82) = .19, p = .044. Multiple linear regression analysis was used to examine the effect of general stress, gay stress, and social support on depressive symptoms. Due to the possibility of confounding factors, we controlled for age, income, educational attainment, employment status, health status, personal history of depression, familial history of depression, relationship status, and length of relationship. Race was not included in the model since the sample included little racial variability. Responses of "unsure" for familial history of depression were coded as 0.5, although the model was unchanged if they were included with either the "yes" or "no" responses. The overall model was significant, R² = .550, F(17, 66) = 4.74, p < .001.
Due to the high degree of correlation between MOGS, MSPSS, and PSS-4, each was examined in a separate regression model, controlling for the demographic factors listed above.
Gay stress significantly predicted depression symptoms, b = .05, t(69) = 2.40, p = .019. Gay stress also explained a significant proportion of variance in depression symptoms, R² = .34, F(15, 69) = 2.32, p = .010. Social support significantly predicted depression symptoms, b = -.15, t(75) = -3.93, p < .001, and explained a significant proportion of variance in depression symptoms, R² = .41, F(15, 75) = 3.44, p < .001. General stress also significantly predicted depression symptoms, b = .80, t(76) = 6.00, p < .001, and explained a significant proportion of variance, R² = .51, F(15, 76) = 5.36, p < .001. None of the demographic factors for which we controlled were statistically significant in any of the models.
Stepwise regression analysis was also conducted, and results indicated that after controlling for demographic factors, general stress was the most significant predictor of depressive symptoms, b = .85, t(68) = 5.61, p < .001; R² = .51, F(1, 68) = 31.44, p < .001. After controlling for demographic factors and general stress, social support was the next and final significant predictor of depressive symptoms, b = -.09, t(67) = -2.33, p = .023; R² = .55, F(1, 67) = 5.43, p = .023.
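The quantities reported in the regressions above, the coefficient b for each predictor and the proportion of variance explained R², come from an ordinary least squares fit. The sketch below illustrates how they are computed on synthetic data; it is not the study's analysis code, and the variable names and effect sizes are assumptions chosen only to mimic the direction of the reported effects.

```python
import numpy as np

def ols_r_squared(X, y):
    """Fit y on [intercept, X] by least squares; return (coefficients, R^2)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return beta, 1.0 - ss_res / ss_tot

# Synthetic data: depression rises with stress and falls with support.
rng = np.random.default_rng(0)
stress = rng.normal(size=100)
support = rng.normal(size=100)
depression = 0.8 * stress - 0.15 * support + rng.normal(scale=0.5, size=100)

beta, r2 = ols_r_squared(np.column_stack([stress, support]), depression)
# beta[0] is the intercept; beta[1] and beta[2] are the b values for
# stress and support, positive and negative respectively by construction.
```

In the actual analyses, the demographic control variables would simply be additional columns of X, and the stepwise procedure amounts to comparing R² before and after each candidate predictor is added.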
---
DISCUSSION
Researchers have not determined the risk or prevalence of postpartum depression (PPD) among lesbian women (Ross, 2005), nor have they identified theories or treatments for maternal depression among lesbian women. If, when, and why depression affects lesbian mothers needs to be examined; once known, maternal, partner, and child morbidity may decrease.
The first aim of this study was to determine the prevalence rate of depression in a sample of self-identified lesbian women with at least one child less than 18 years of age. We hypothesized that lesbian mothers would experience higher rates of depressive symptoms than reported among presumed heterosexual mothers. The second aim was to investigate minority stress and determine if higher levels of social support reduce the effects of gay stress on depression symptoms. We hypothesized that social support would protect lesbian mothers from the negative effects of gay stress thereby reducing depressive symptoms.
In this study, 8.4% of participants endorsed sufficient symptoms to indicate a current episode of clinical depression. The national rate of depression for US women varies from 2.6% to 13.9%, depending on age and location (Substance Abuse and Mental Health Services Administration, 2012). Additional research indicates that the rate for US mothers is 10.2% (Ertel et al., 2011). Ertel et al. (2011) found that the rate varied considerably among the sample; of note, being aged 35 years or more, having a college degree, being married, having full-time employment, and having the highest income significantly reduced the rates of depression.
The depression rate for this sample of lesbian mothers is similar to rates for US women and mothers. Since depression is an enormous public health burden, this finding exemplifies the need for future research and treatment targeting this population. The majority of the women in this study were older and enjoyed high incomes, a college education, and full-time employment, which according to Ertel et al. (2011), were among the characteristics of women who reported the lowest levels of depression in their research with US mothers. This suggests that lesbian mothers at large may actually experience higher rates of depressive symptoms. Nevertheless, with nearly 10% of the sample experiencing an episode of clinical depression in the past week, additional research with lesbian mothers is critical.
An additional aim of this study was to examine postpartum depression. However, only fifteen participants reported having a child aged birth to 12 months. Although results indicate that none of these participants were suffering from clinically significant levels of postpartum depression, the sample is too small to draw meaningful conclusions.
An important finding of this study was that almost 40% of the participants reported a previous diagnosis of major depressive disorder. The lifetime prevalence rate for US women is 20.7% (CDC, 2011). This survey did not collect information about the age that the women were diagnosed so it is unknown if the women were childbearing at the time of the diagnosis.
Regardless, this finding indicates the dire need to target lesbian women with interventions to reduce depression.
The second aim of the study was to investigate minority stress and determine if higher levels of social support reduce the effects of gay stress on depression symptoms. We hypothesized that social support would protect lesbian mothers from the negative effects of gay stress thereby reducing depressive symptoms. A correlation analysis indicated that women who reported lower levels of social support reported higher levels of gay stress, depressive symptoms, and general stress. Social support independently predicted depressive symptoms when demographic characteristics were included in linear and stepwise regression analyses. This finding is not surprising and is supported by previous research that identified low social support as a significant risk factor for poor health outcomes (Beals & Peplau, 2005;Blanchard et al., 2009;Seguin et al., 1995;Wayment & Peplau, 1995). It signifies the importance of social support in this minority population and suggests that increasing social support alone may decrease depressive symptoms or the prevalence of gay stress.
Correlation analyses indicated that women who reported higher levels of general stress experienced lower levels of social support and higher levels of depressive symptoms and gay stress. When demographic factors were included, general stress independently predicted depression in linear and stepwise regression analysis. These results are expected and correspond with previous research (McFarlane, et al., 2005;Meyer & Paul, 2011;Ross, 2005;Trettin et al., 2006). Although additional supporting evidence is needed, these findings suggest that targeting lesbian mothers with general stress reduction interventions may decrease their level of depression symptoms.
Correlation analysis indicated that women who reported higher levels of gay stress reported lower levels of social support as well as higher levels of depressive symptoms and general stress, which agrees with the minority stress model (Meyer, 2003). When demographic factors were included, gay stress was independently significant in linear regression analysis to predict depression. Gay stress, however, was not a significant predictor of depression in the stepwise regression analysis. Thus, although this study has limitations that prevent generalizability, results of the stepwise regression analysis provides evidence that the minority stress model may need to be adapted or perhaps discarded to understand depression more fully among lesbian mothers. As a result, not only is additional research needed to test this model but new theory is also needed to guide future research. Consequently, if and when evidence-based theories successfully predict depression among lesbian mothers, interventions can be developed to target and reduce depression symptoms among mothers.
---
LIMITATIONS
This study has a number of limitations. A primary limitation is that the participants' identities and eligibility could not be verified. Additional limitations include sampling bias, selection bias, volunteer bias, and measurement bias. Also, limitations result from the lack of a comparison group, lack of geographic diversity, and the possibility that other confounding factors unknowingly affected the results, including the possible impact of participation by the timing of Hurricane Irene.
Participants were recruited anonymously and on-line. Without the ability to verify identity, it is not possible to confirm eligibility of the participants or the accuracy of their responses. While it is possible that ineligible individuals completed the survey, without financial compensation there was little motivation to do so. Furthermore, the survey included three screening criteria and required 15-20 minutes to complete, thus, due to the lack of incentive, it is not likely that ineligible individuals would have taken the time to complete it.
The majority of the sample was Caucasian, well educated, employed full-time, had household incomes in the higher income brackets, and were 31 years or older, thus indicating sampling bias and a non-representative sample of lesbian mothers that may be suffering from depressive symptoms. Ertel et al. (2011) indicates that being at least 35 years of age, having a college degree, being married, being employed full-time, and having a high income were protective factors against depression. The majority of the participants in this study had most of these protective factors; thus, it is possible that these mothers experienced artificially lower rates of depressive symptoms than did women of the national average.
Sampling bias may have also been present since participants learned about this study by seeing a flyer at a local business, organization, or professional; by seeing an advertisement in an
LGBT newsletter or on Facebook; or by hearing about it from a friend, religious leader, or coworker. Most of these recruitment sources indicate social connections and/or involvement in the LGBT community. Thus, these participants are likely to enjoy larger amounts of social support than others who are more isolated or lacking community involvement. Consequently, these recruitment methods may have missed a large and important portion of the target population.
While every effort was made to recruit diverse women, selection bias may have affected the sample. Recruitment flyers were sent to hundreds of businesses and organizations of varied demographics; however, the flyer frequently had to be approved by higher management or the board of directors, thus the final distribution decision was theirs. Organizations and businesses that did not specifically cater to the LGBT population may not have believed that the survey suited their organization's mission. Furthermore, the stigma and fear of reprisal attached to the status of sexual minorities may have prevented some individuals from displaying the flyer.
Volunteer bias may have affected the sample. There is a dearth of research with sexual minority samples. Highly educated women may be aware of this and thus be more willing to participate. Likewise, these participants may be researchers themselves and know the importance of expanding the research base with lesbian women. Furthermore, the lack of compensation may have been a deterrent for low-income participants. Individuals with lower incomes may have been experiencing financial and/or time constraints that prevented them from completing the survey.
Items from MOGS were mistakenly not randomized, thereby introducing measurement bias. The questions were categorized by family, friends, work, etc. Investigation of the questions provided no indication that the results would have been significantly different if randomized.
Furthermore, retesting with a small sample did not change the results. Although it is not possible to determine if this error affected survey results, MOGS was the least significant predictor of depression symptoms; thus, if anything, the results were overly conservative. Additionally, the findings for social support and general stress were significant even if gay stress was removed from the analysis.
This study did not include a heterosexual comparison group. The rate of depression was compared to the national rates of depression, which was determined using different methods and instruments. However, due to the validity and reliability of most standard instruments, it is unlikely that these varied collection efforts significantly affected the results. Nationally reported rates of depression include all women, including sexual minorities; therefore, if these minorities have increased rates of depression, their inclusion may have artificially inflated the national average. Nevertheless, since the population of sexual minority women is small, this bias is likely to be insignificant.
The lack of geographic diversity among the participants is also a limitation. The majority of the data were collected from residents of the East Coast; thus, those individuals may have unspecified risk or protective factors that significantly influenced the results. This and the small sample size indicate the need for a large, geographically diverse study population.
Additionally, unknown factors may have influenced participation or responses obtained from this study introducing unidentified confounding. For example, Hurricane Irene may have affected study participation and responses due to its significant damage to the east coast in August of 2011. However, an examination of participation dates and zip codes indicates that only eight surveys were collected from the affected zip codes after the Hurricane ("List of," 2011).
Although it is not possible to determine if those participants were personally affected, the surveys were completed within four months of the Hurricane; thus, it seems unlikely that individuals directly affected would have completed an uncompensated survey. Furthermore, a regression analysis was conducted without the eight participants and there was no difference in the significance level of the model. It is also possible that the response rate was affected by Hurricane Irene. Twenty-four surveys were completed before the Hurricane in the affected zip codes, whereas only eight were completed afterward. However, recruitment efforts began in May 2011, thus, it is expected that participation would taper off as the areas became saturated with flyers; therefore, the effect of the Hurricane on the response rate was likely minimal.
---
FUTURE RESEARCH DIRECTION
Although only a portion of the study population endorsed evidence for a current major depressive episode, nearly 40% reported a history of diagnosis and/or treatment of depression by a health care provider. Since a personal history of depression is a risk factor for additional major depressive episodes (Solomon et al., 2005), effectively recognizing and treating depression in this minority population may help to reduce future morbidity and possibly mortality for the sufferer. In the next section, barriers to recognizing and treating depression will be explored and recommendations for future research will be proposed.
---
BARRIERS TO IDENTIFYING AND TREATING DEPRESSION
Barriers to identifying depression and developing treatment programs are widespread and exist on a national, organizational, and individual level. These barriers are common for many women and mothers; however, additional, unique barriers exist for lesbian women and mothers.
The federal government acts as a national barrier since it fails to recommend universal PPD and maternal mental health screenings (Santoro & Peabody, 2010). Without a strong, united, national voice, identification, the most basic and easiest part of treatment, can be disjointed and maternal depression may remain undiagnosed and untreated.
Lesbian women and mothers experience additional national barriers. The lack of federal benefits and marriage equality prevents LGBT individuals from receiving domestic partner benefits and tax benefits. Programs such as the Consolidated Omnibus Budget Reconciliation Act (COBRA), the Family Medical Leave Act (FMLA), Flexible Spending Accounts (FSA), and Health Savings Accounts (HSA) do not apply to couples in domestic partnerships (Human Rights Campaign, n.d.). Some employers, however, do provide some benefits to same-sex couples; nevertheless, the benefits package is often subject to taxes not imposed on heterosexual couples (Badgett, 2007). Regardless, in instances where individual employers provide same-sex partner benefits, some individuals report not using those benefits for fear of "coming out" at work.

There are multiple organizational barriers. In a qualitative study, a physician indicated that other conditions, such as diabetes, are more important to identify and treat than depression (Edge, 2010). Other physicians in the study indicated that they did not feel they had a clearly defined protocol or the education to accurately diagnose depression. Furthermore, incongruent care and poor communication between physicians were cited as further complications for diagnosing and treating depression.
In addition to the organizational barriers that affect heterosexual women, many lesbian women experience discrimination from the medical community, which may significantly affect mental health diagnosis and treatment (ACOG, 2009; CLGRO, 1997; Steele, Ross, Epstein, Strike, & Goldfinger, 2008). Intake paperwork often fails to recognize same-sex relationships and clinical training often omits LGBT-specific issues (CLGRO, 1997). Consequently, research indicates that some lesbian women report anxiety in sharing their sexual orientation with their health care provider for fear that it may affect their care (CLGRO, 1997; McManus, Hunter, & Renn, 2006). Furthermore, some LGBT individuals avoid seeing physicians who are not specifically LGBT-friendly, which may cause delays in treatment or failure to receive specialty care due to a lack of providers (CLGRO, 1997).
A number of individual barriers affect the diagnosis and treatment of depression in women. During focus groups, one group of researchers found that many mothers did not feel comfortable talking to their doctors about their depressive symptoms (Heneghan, Mercer, & DeLone, 2004). Some even feared that their doctor would report them to social services if they revealed the extent of their problems. Other women indicated that they did not know where to seek help or what treatment was available. On the other hand, women in a different study refused treatment, medication specifically, for fear of physical addiction or adverse health consequences for their breastfeeding infants (Turner, Sharp, Folkes, & Chew-Graham, 2008).
Individual barriers unique to lesbian women are interrelated with national and organizational barriers. Two barriers are repeatedly reported. The first is that lesbian women often choose not to use health care services due to negative past experiences resulting from homophobia (ACOG, 2006; Austin & Irwin, 2010; CLGRO, 1997; Hutchinson, Thompson, & Cederbaum, 2006; McManus et al., 2006); the second is the financial burden of health care, usually due to the lack of health insurance (ACOG, 2006; Austin & Irwin, 2010; CLGRO, 1997; Hutchinson et al., 2006; McManus et al., 2006). These barriers, combined with those common to heterosexual women, such as lack of knowledge about illnesses and treatments (Heneghan et al., 2004), indicate a dire need for additional research and efforts targeting lesbian women.
Understanding barriers may allow programs to be adapted specifically for lesbian mothers. For example, the financial burden of treatment indicates the need for a low-cost intervention. Research indicates that online cognitive behavioral therapy (CBT) depression treatment programs are efficacious, cost-effective, and acceptable to users (Bowler et al., 2012; Carter, Bell, & Colhoun, 2012; McCrone et al., 2004; Proudfoot et al., 2004). Some research has been conducted using online treatment programs with lesbian women and provides evidence that they may be a promising option for this population (van Brunt, 2009). Regardless, since lesbian women are at an increased risk of depression and the consequences of depression are widespread, additional theories and research are essential.
---
CONCLUSION
Strong evidence suggests that lesbian women suffer from greater rates of depression than do heterosexual women (Bradford & Ryan, 1988; Gilman et al., 2001; Harrison, 1996; Sorensen & Roberts, 1997; White & Levinson, 1995). However, existing research offers conflicting findings about depression outcomes. Much research indicates that lesbian mothers are at no higher risk for developing depression or poor mental health outcomes than their heterosexual counterparts (Fulcher et al., 2002; Gartrell et al., 2000; Patterson, 2001). These findings, however, have limitations that do not account for unique risk factors that may influence the expression of depression in this population. Since depression affects morbidity of the patient and others, including partners and children (Ahlström et al., 2009; Bulloch et al., 2009; Campbell et al., 2009; Fanti & Henrich, 2010; Ishaque, 2009; Santos et al., 2010), it is critical to know prevalence rates of depression, as well as unique risk and protective factors that may influence symptoms among lesbian mothers.
The first aim of the RADICLE Moms Study was to determine the prevalence rate of depression in a sample of self-identified lesbian women with at least one child less than 18 years of age. We hypothesized that lesbian mothers would experience higher rates of depressive symptoms than reported among presumed heterosexual mothers. One hundred thirty-one eligible participants completed an anonymous Internet survey. Results indicate that 8.4% of the sample reported clinically significant levels of depressive symptoms. This rate is similar to that of US women and mothers. However, limitations of the sample, such as its privileged demographics, suggest that women in the lesbian mother population at large may experience significantly higher rates of depressive symptoms. Regardless, nearly 40% of the participants reported a previous diagnosis of major depressive disorder, which is almost double the national average for US women (CDC, 2011).
The second aim of the study was to investigate minority stress and determine if higher levels of social support reduce the effects of gay stress on depression symptoms. We hypothesized that social support would protect lesbian mothers from the negative effects of gay stress, thereby reducing depressive symptoms. Correlation analysis indicated that women who reported higher levels of social support did have lower levels of gay stress and depression symptoms. Multiple regression analysis provided an independent link between gay stress and depression as well as social support and depression. A high level of general stress was also a significant predictor of depressive symptoms. Although additional supporting evidence is needed, these findings suggest that targeting lesbian mothers with interventions to decrease general stress or gay stress, or to increase social support, may reduce depression symptoms.
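The multiple regression analysis described above can be illustrated with a small simulation. The following is a minimal sketch in Python using ordinary least squares on synthetic data; the variable names and coefficient values are illustrative assumptions, not the study's actual measures or estimates:

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares; returns coefficients (intercept first).
    A minimal stand-in for the multiple regression the study describes."""
    X1 = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Synthetic data mimicking the described relationships: depressive symptoms
# rise with gay stress and general stress, and fall with social support.
rng = np.random.default_rng(0)
n = 131  # sample size reported in the study
gay_stress = rng.normal(0, 1, n)
social_support = rng.normal(0, 1, n)
general_stress = rng.normal(0, 1, n)
depression = (0.4 * gay_stress - 0.5 * social_support
              + 0.6 * general_stress + rng.normal(0, 0.5, n))

X = np.column_stack([gay_stress, social_support, general_stress])
beta = fit_ols(X, depression)
print(beta)  # [intercept, gay stress, social support, general stress]
```

Under this data-generating process, the fitted coefficients recover a positive independent link for gay stress and general stress and a negative one for social support, mirroring the pattern of results reported.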
As illustrated with this study, understanding the causes of depression can be very challenging; however, treating it can be similarly difficult. These challenges stem from many national, organizational, and individual level barriers, some that are unique to lesbian women.
Integrating knowledge of these barriers with the findings from the RADICLE Moms Study indicates that an online program may be a possible treatment option for lesbian mothers suffering from depression. Other research has already begun exploring this option (van Brunt, 2009).
LGBT-adapted online programs may overcome the financial burden of treatment, provide a supportive community for mothers, and decrease gay stress experienced from the medical community. Regardless, in order to reduce negative health outcomes for lesbian mothers and their families, further research and theory development on the risk and protective factors of depression is critical.
---
website or include it in a newsletter, e-blast, listsrv, blog, mailing, or similar posting; or even send it to someone else who may be able to help? Little is known about lesbian mothers; I hope that this study will allow us to better understand lesbian motherhood and, in the future, develop support or treatment for mothers who need it. In order to be successful though, I need the largest sample possible. Your support and assistance would be greatly appreciated. Thank you!
---
APPENDIX C
RECRUITMENT FLYER
---
The RADICLE Moms Study
A graduate student at the University of Pittsburgh is conducting a research study to learn more about mood and social support among lesbian mothers.
---
You are eligible for this research study if:
• You are 18 years or older and identify as lesbian
• Have a child who is under 18 years of age
You can be single, partnered, or married.
If you decide to take part in this study, you would have to fill out an online survey in which we would ask you:
• General background information
• Questions about your relationships, life experiences, and mood
The study will take no more than 15 minutes and can be completed online at your convenience. You will not receive payment for this study. Your participation is voluntary and completely confidential. |
Unaddressed functional difficulties contribute to disparities in healthy aging. While the Affordable Care Act (ACA) is believed to have reshaped long-term care, little is known about how it has collectively altered the prevalence of older adults with functional difficulties and their use of family and formal care. This study uses nationally representative data from the Health and Retirement Study (2008-2018) to describe racial-ethnic differences in the prevalence of community-dwelling older adults who had difficulty with, but lacked assistance for, self-care, mobility, and household activities before and after the ACA. Individuals with functional difficulties accounted for about one-third of Black and Hispanic individuals, compared to one-fifth of White people. The prevalence of Black and Hispanic people with functional difficulties lacking corresponding care support was consistently 1.5 times higher than that of White people. Racial-ethnic differences disappeared only for low-income households, where unaddressed difficulties were uniformly high. While formal care quantity was similar, Black and Hispanic people with functional difficulties received nearly 50% more family care than White people. These gaps between White, Black, and Hispanic older adults were persistent over time. These findings suggest that racial-ethnic gaps in aging needs and supports remain despite major health care reforms in the past decade.
Healthy aging, 1 a process in which functional health is a key driver, is shaped by a broad set of interrelated sociocultural, economic, and health care-related factors. [2][3][4][5] Many older adults with functional difficulties-including challenges with everyday activities like grocery shopping, dressing, or using the toilet-address these long-term care needs and maintain their independence in the community using uncompensated support from family and friends (hereafter referred to as "family care") or, in some cases, supplemental formal care (paid support for long-term care needs). 6,7 However, functional difficulties and use of supportive services for racial and ethnic minorities can differ greatly from those of White people, leading to divergent aging experiences in the United States. [8][9][10][11][12][13][14] Although Black or Hispanic older adults are at heightened risk for having functional difficulties, they receive less formal care than White people. 10,11 Compared with their White counterparts, they have shorter primary care visits, 15 less annual face time with physicians, 16 and fewer days in hospice, 17 while simultaneously experiencing longer hospitalization, 18 post-acute rehabilitation stays, 19 and in some contexts, greater use of home-and community-based services. 20 While the causes of differences in care quantity are multifactorial and contextual, 21 they are posited to include policy-modifiable factors, such as discrimination, access to care barriers, and other systemic causes. 10,11,22 The myriad of recent US health policies that are believed to have reshaped long-term care 23 may have differentially altered the health of community-dwelling older adults, changing the prevalence of people with functional difficulties and how functional needs are addressed across racial-ethnic lines. 
One potential policy impact is through Medicaid expansion under the 2010 Affordable Care Act (ACA), which was associated with increased access to formal care for low-income adults under 65 years. 24,25 A recent study found that expansion was associated with an observed 4.4-percentage-point (pp) increase in any long-term care use, and a 3.8-pp increase in home health use. 22 However, Black and Hispanic individuals may have been especially affected, given lower baseline insurance coverage compared with White people. 26,27 As a counterexample, although ACA's expansion of Medicaid Home and Community-based Services promoted access to community-based formal care broadly, 28 it may have been more beneficial to White as opposed to Black or Hispanic individuals, given that White older adults are more likely to reside in communities with more formal care supply. 13 This dynamic of overall, but disparate, increases in care use may have been furthered under the ACA's Balanced Incentives Program (BIP), which offered states over $2 billion in enhanced Medicaid matching funds to expand home- and community-based care. While the policy was associated with a 3% increase in daily caregiving in BIP-adopting states, likely due to shifts from nursing to community care, it disproportionately benefited caregivers with higher incomes. 29 This disparity may result from more challenges for lower-income caregivers associated with social determinants of health (eg, housing instability, health literacy, geographic isolation). 29 Beyond Medicaid expansion, several of Medicare's alternative payment programs may have also impacted long-term care use. For instance, both the Bundled Payments for Care Improvement (BPCI) Initiative and Medicare Shared Savings Program (MSSP) have reduced costly post-acute care use. The BPCI was associated with 0.4% and 0.7% reductions in skilled nursing facility (SNF) and inpatient rehabilitation facility care and a 0.2% increase in home health agency services. 
30 The MSSP has been associated with fewer discharges to facilities rather than home and shorter SNF stays and home health episodes. 31,32 Given the substitutability of formal and family care, particularly for lower-income individuals, changes in post-acute care use could shift the distribution of formal and family care use. [33][34][35] For instance, Golberstein et al 34 found that a 1-unit relative decrease in the use of home health services results in more than a half-unit relative increase in informal care hours; this effect was most pronounced among lower-income families, who may not have adequate resources to privately purchase formal home care and therefore are likelier to replace costlier care with more family care hours. Moreover, if program participants avoid higher-risk, and costlier, patients, this could have implications for differences in needs and care by race and ethnicity. Yet, few studies have examined the evolution of long-term care demands and use among older adults, and, in particular, formal and family care patterns in the context of these broad policy shifts. 7 In particular, how large-scale, and widely varying, policy changes with potentially opposing effects have collectively affected persons of White, Black, and Hispanic racial-ethnic backgrounds is unknown. Coe and Werner, 36 for instance, examined the prevalence of people with unaddressed functional difficulties among community-dwelling older adults in 2016 but did not examine changes over time or by race or ethnicity. Van Houtven and colleagues 7 examined care support receipt trends from 2004 to 2016 between White, Black, and Hispanic individuals who were 65 or older with multiple functional difficulties. However, these findings elide potential policy effects on younger older adults that would be expected given policies like Medicaid expansion. Moreover, little is known about shifts in the quantity of care support for functional difficulties, an important dimension for potential inequities.
We aimed to provide a nationally representative picture of unaddressed functional difficulties and corresponding care support among White, Black, and Hispanic communitydwelling older adults in the United States from 2008 through 2018. We used 6 waves of data from the longitudinal Health and Retirement Study (HRS). To proxy for functional difficulties, we used respondents' reported need for help with activities of daily living (ADLs) or instrumental ADLs (IADLs). A lack of care support for a specific functional difficulty was based on respondent report of no receipt of either formal or family care for their corresponding functional difficulty (eg, reported no help with using the toilet if they indicated difficulty with this activity). Next, we measured the weekly overall quantity of hours of care received by people with any functional difficulty, summed across all functional difficulties. For each population, we estimated the prevalence (and changes in prevalence) of (1) people with any functional difficulties, (2) people with functional difficulties lacking corresponding care support, and (3) weekly hours of family and formal care received by people with any functional difficulties. To explore potential impacts of access to insurance on disparities, namely Medicaid before and after expansion and Medicare, we reconducted analyses for populations with incomes below and above eligibility thresholds to the Medicaid program and for populations under and over age 65 years.
---
Data and methods
---
Data and variables
We used the RAND HRS Longitudinal File, 37 a cleaned version of HRS's core data, to examine HRS waves 2008 through 2018. The HRS is a national, biennial panel survey of Americans over the age of 50 and their households that includes data on sociodemographic characteristics, functional difficulties, and help received for each functional difficulty. 38 Our sample consisted of community-dwelling individuals (ie, not in a nursing home) aged 55 years or older in waves 2008 through 2018 of the HRS (98 004 person-waves) who were non-Hispanic White (hereafter, "White"; 63 923 person-waves), non-Hispanic Black (hereafter, "Black"; 17 946 person-waves), or Hispanic (7701 person-waves) (Appendix Table S1). We restricted the sample to individuals ages 55 years or older because the HRS refreshes its survey population with cohorts ages 51 and older every 6 years, with only adults 55 years or older consistently included in each wave.
---
Prevalence of people with any functional difficulties
We considered people as having any functional difficulty if they (or a proxy reported them as) having any difficulty with at least 1 of 11 activities due to health or memory problems, or if they did not or could not do at least 1 of those activities. 7 The recall period was the last 2 years or since the last survey wave. These activities were (1) 6 ADLs representing self-care activities and mobility (eating, dressing, bathing, walking, getting into or out of bed, and using the toilet) and (2) 5 IADLs representing household activities (meal preparation, grocery shopping, making phone calls, managing money, and managing medications).
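As a rough illustration, the "any functional difficulty" indicator could be derived from per-activity difficulty flags roughly as follows; the field names here are hypothetical shorthand, not the actual RAND HRS variable names:

```python
# Hypothetical per-respondent record: 1 = difficulty with (or did not /
# could not do) the activity, 0 = no difficulty.
ADLS = ["eat", "dress", "bathe", "walk", "bed", "toilet"]          # 6 ADLs
IADLS = ["meals", "shop", "phone", "money", "meds"]                # 5 IADLs

def any_functional_difficulty(record: dict) -> bool:
    """True if the respondent reports difficulty with at least 1 of the
    11 ADL/IADL activities."""
    return any(record.get(activity, 0) == 1 for activity in ADLS + IADLS)

person = {activity: 0 for activity in ADLS + IADLS}
person["shop"] = 1  # difficulty with grocery shopping only
print(any_functional_difficulty(person))  # True
```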
---
Prevalence of people lacking care support for functional difficulties
We categorized a person as someone lacking care support if a respondent or proxy reported at least 1 ADL or IADL difficulty (ie, an individual with any functional difficulty) for which no corresponding assistance was received during the past month; assistance was evaluated as support from a family or formal caregiver or through the use of relevant equipment (ie, device for walking or equipment to help with getting in or out of bed), following approaches taken in prior literature. 36 Given the substantial needs among community-dwelling individuals who can perform tasks with some difficulty, we chose to identify difficulties that were unsupported rather than a failure to complete a specific functional task (which narrowly reflect unmet need). The former offers a broader portrait of community need than the latter and should be interpreted accordingly.
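A minimal sketch of this classification rule, under the simplifying assumption that support is recorded per activity (field names are illustrative, not HRS variables; equipment applies only to walking and bed transfers in the definition above):

```python
def lacks_care_support(difficulty: dict, helped: dict, equipment: dict) -> bool:
    """True if, for at least 1 activity the person reports difficulty with,
    neither family/formal help nor relevant equipment was received."""
    for activity, has_difficulty in difficulty.items():
        if not has_difficulty:
            continue
        supported = helped.get(activity, False) or equipment.get(activity, False)
        if not supported:
            return True  # at least 1 difficulty with no corresponding support
    return False

difficulty = {"walk": True, "toilet": True}
helped = {"toilet": True}    # family or formal help with toileting only
equipment = {"walk": True}   # uses a device for walking
print(lacks_care_support(difficulty, helped, equipment))  # False: both covered
```

The per-activity matching reflects the definition above: help with one task does not offset an unsupported difficulty with a different task.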
---
Average quantity of care per person per week
We calculated average weekly care hours summed across all 11 ADL or IADL activities, by formal and family care type, that individuals with any functional difficulty reported having received during the past month. Formal care hours included care from an organization, an "institution" employee, a paid helper, or a health care professional. Family care hours included uncompensated care from family (eg, spouse/partner, child, in-laws) and friends.
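The aggregation of care hours by caregiver type can be sketched as below; the helper-type labels are illustrative simplifications of the HRS helper classifications, not actual codes:

```python
# Helper types counted as formal care (paid or institutional sources).
FORMAL = {"organization", "institution_employee", "paid_helper", "professional"}

def weekly_hours_by_type(helper_records):
    """Sum weekly care hours across all helper-activity combinations,
    split into (formal, family) totals for one respondent.
    helper_records: iterable of (helper_type, hours_per_week) pairs."""
    formal = family = 0.0
    for helper_type, hours in helper_records:
        if helper_type in FORMAL:
            formal += hours
        else:  # spouse/partner, child, in-laws, friends, etc.
            family += hours
    return formal, family

records = [("child", 10.0), ("paid_helper", 2.0), ("friend", 3.5)]
print(weekly_hours_by_type(records))  # (2.0, 13.5)
```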
---
Analysis
We compared the risk-adjusted prevalence in people with functional difficulties and corresponding care support of White, Black, and Hispanic people in the past decade. To avoid discounting differences in outcomes that result from cumulative race-and ethnicity-related disadvantages, including physical health and access to and use of health care, 39,40 we limited risk adjustment to age, age squared, sex, marital status, and children. For each descriptive statistic, we estimated 95% CIs adjusted for survey weighting.
In pooled analyses, we combined all survey waves to estimate the following for White, Black, and Hispanic individuals:
(1) prevalence of people with any functional difficulties, (2) prevalence of people with functional difficulties lacking corresponding care support, and (3) average weekly hours of each of family and formal care. The prevalence for a specific group was calculated by dividing, within each population, the number of people with the outcome (eg, any functional difficulty) by the total sample size. For prevalence estimates (1) and (2), we included data from all respondents. For weekly hours of care, (3), we included data only from respondents with at least 1 functional difficulty. To evaluate insurance coverage differences that could explain care support disparities, we calculated prevalence rates by poverty (≤138% of the Federal Poverty Level [FPL] or not) and age (≥65 y or not). 41 We chose 138% of the FPL because the ACA extended Medicaid eligibility to adults with incomes up to an effective FPL threshold of 138% 42 ; we chose 65 years as a cutoff because Medicare, which impacts access to insurance and health care services, primarily enrolls Americans over age 65.
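A simplified sketch of a survey-weighted prevalence estimate with an approximate 95% CI is shown below. This illustration ignores the HRS complex design (strata and clusters), which the study's survey-weighted CIs would account for, and uses a Kish effective-sample-size approximation:

```python
import math

def weighted_prevalence(outcomes, weights):
    """Survey-weighted prevalence with an approximate 95% CI.
    `outcomes` are 0/1 indicators (e.g., any functional difficulty);
    `weights` are survey weights."""
    total_w = sum(weights)
    p = sum(w * y for w, y in zip(weights, outcomes)) / total_w
    # Effective sample size under weighting (Kish approximation).
    n_eff = total_w ** 2 / sum(w * w for w in weights)
    se = math.sqrt(p * (1 - p) / n_eff)
    return p, (p - 1.96 * se, p + 1.96 * se)

outcomes = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
weights = [1.0, 2.0, 1.5, 1.0, 0.5, 1.0, 2.0, 1.0, 1.0, 1.0]
p, ci = weighted_prevalence(outcomes, weights)
print(round(p, 3))  # 0.292
```

The prevalence is the weighted share of people with the outcome in the group, matching the description above; the study's risk-adjusted estimates would additionally condition on age, age squared, sex, marital status, and children.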
In cross-sectional trend analyses, we used repeated cross-sections instead of pooled data. Trend analyses were otherwise identical to pooled analyses. For further clarity, we showed main results separately for ADLs and IADLs.
---
Robustness checks
First, given ambiguities in the characterization of functional limitations and care support, we adopted an alternative method to categorize whether a person lacked care support. This approach allows that individuals may report functional difficulty even with the use of assistive equipment (ie, equipment does not eliminate difficulty with walking or getting in or out of bed). 43 This alternative approach yields a higher prevalence of people lacking care support but similar between-group patterns (Appendix Figure S8).
Second, we examined trends in the prevalence of people with functional difficulties receiving any (regardless of amount of) care support as a way to situate our findings within the existing literature. We found results consistent with findings by Van Houtven and colleagues, 7 suggesting no differences in the receipt of any care between White and Black and Hispanic individuals. These results demonstrate that binary measures (care receipt or not) can belie important divergences in care support (Appendix Figure S9).
Third, we examined trends in the number of functional difficulties among people with any difficulties. We used the number of functional difficulties as a proxy for disability level. 44 Changes in disability levels over time were not statistically significant, but patterns suggest potential decreased severity in the period after the ACA (Appendix Figure S10).
---
Results
From 2008 to 2018, the prevalence of older people with any functional difficulties was higher for Black and Hispanic than White populations. Pooled estimates show that over one-third of Black and Hispanic people, compared with about one-fifth of White people, had any functional difficulties (Figure 1).
Health Affairs Scholar, 2023, 1(3), 1-8
There was little evidence that these between-group differences diminished over time. The prevalence of Black and Hispanic people with any functional difficulties was over 30% in each of the survey waves, which was consistently 1.5 times greater than the corresponding prevalence among White people. Similar to functional difficulties, there were between-group differences in corresponding care support (eg, difficulty with but no help received for toileting), with 20.5% (95% CI: 19.1-21.9%) of Black and 20.1% (95% CI: 17.8-22.4%) of Hispanic compared to 12.4% (95% CI: 11.9-12.9%) of White people lacking corresponding care support for at least 1 reported functional difficulty. Again, these differences across groups were persistent over time (Figure 2).
Next, focusing on people with any functional difficulties, we examined the quantities of family and formal care support, overall (pooled across all waves) and across waves. While all groups received substantial family care amounts, White people received an average of 12.3 (95% CI: 11.6-13.1) compared to 17.3 (95% CI: 15.7-18.9) weekly family care hours among Black people and 20.4 (95% CI: 17.7-23.2) among Hispanic people. In contrast, average weekly formal care hours were statistically indistinguishable, at 1.6 (95% CI: 1.3-1.9) hours for White, 1.8 (95% CI: 1.3-2.4) hours for Black, and 2.1 (95% CI: 1.4-2.8) hours for Hispanic people (Figure 3).
Trend analysis results suggest, for all groups, respective decreases and increases in family and formal care, although these estimates were modest, imprecise, and did not indicate strong patterns or decreases in gaps between groups (Figure 3).
When examined by income (Figure 4) and age (Appendix Figures S2-S5), patterns were generally similar to our main approach except for people in poverty. The prevalence of lower-income people lacking care support was consistently high over time, and higher than for those aged 65 and older (Appendix Figure S5), but rates were statistically indistinguishable between White, Black, and Hispanic populations. Conversely, the overall prevalence was lower among higher-income individuals, but disparities were observed, with higher rates for Black and Hispanic people (Figure 4).
---
Discussion
In this nationally representative study, we found that, throughout the period from 2008 to 2018, the prevalence of Black and Hispanic older people with functional difficulties and lacking corresponding care support was consistently around 50% higher than that of White people. More than 30% of older Black and Hispanic compared to 20% of older White people had 1 or more difficulties with functional tasks involving self-care, mobility, or household activities. One-fifth of Black or Hispanic compared to just over one-tenth of White older adults lacked corresponding care support. These gaps persisted despite 40-65% greater amounts of family care and similar amounts of formal care among Black and Hispanic compared to White people. For lower-income individuals, the prevalence of unaddressed functional tasks was consistently high across all racial-ethnic subgroups. Our findings imply that, collectively, recent US health care reforms were not associated with reductions in racial-ethnic gaps in functional health and care needs.
Prior analyses have documented large differences in functional support across racial-ethnic groups. For instance, 15% and 21% of Black and Hispanic compared to 9% of White individuals ages 70 years and older have unaddressed self-care or mobility needs. 45 Edwards and colleagues 43 estimated a 22% higher prevalence of people lacking care support among Black compared to White older women with cognitive impairment during the period 2000-2014. Our investigation confirms these racial-ethnic differences as well as their persistence into the period after the ACA's implementation. Together, these findings suggest that the gaps were resistant to broad policy changes, even though clinical and long-term care expansion under Medicaid and novel payment systems in Medicare might be expected to predominantly affect those with the least access to high-quality clinical and supportive services, with no evidence of gains over time for lower-income populations eligible for Medicaid expansion. Cross-pressures of the programs may have had a cancelling effect. For instance, Medicaid expansion could have increased formal home health care while potentially lowering the use of family care, even as the BIP shifted care back from formal to family care. The findings also reveal persistent unaddressed difficulties among lower-income individuals that surpass those of older age groups, regardless of race; this suggests critical needs in the future that were not addressed despite widespread incentives for improved care access in Medicaid.
Despite the greater prevalence of people with functional difficulties and more difficulties per person, hours of formal care received by Black and Hispanic older adults only modestly (and not statistically significantly) increased in the post-ACA period. These small changes in formal care receipt are in line with other literature showing an approximately 4-pp increase in the likelihood of any formal long-term care use among older adults following Medicaid expansion. 7,24 While our study was not designed to assess causal implications of Medicaid expansion or other policy changes, our descriptive findings showed no evidence of shifts across groups in the prevalence of people lacking care support among low-income households from before to after the ACA, despite marginal increases in formal care. Thus, to close racial-ethnic gaps, more concerted and broadly targeted policy efforts to direct and tailor formal care equitably are still needed.
Family care, typically outside the scope of health care reform, remained the dominant care type among Black and Hispanic people with functional difficulties. The higher levels of family care received by Black and Hispanic relative to White older adults could be due to differing cultures of family and formal caregiving in the presence of functional needs. 9,10,46,47 As an example, the notion that family wellbeing is more important than that of the individual among some Hispanic caregivers has been posited as 1 reason for the low use of formal home care. 48,49 This heavy reliance on family care could also reflect continued struggles faced by racial-ethnic minorities in accessing high-quality formal care. For instance, the 2022 National Healthcare Quality and Disparities Report shows that Black and Hispanic people had worse outcomes than White people for the majority of access-to-care measures. 50 Moreover, racial-ethnic minorities tend to be overrepresented in low-quality nursing homes and home health agencies, 51,52 which may decrease the appeal of formal over family care. 10 The more limited use of nursing home care may further explain the persistently higher prevalence of functional need among community-dwelling racial-ethnic minorities, and resulting equity issues in unaddressed needs. Other potential factors explaining discrepancies in family care use are the "opportunity costs," which are higher for higher- versus lower-income families, or greater care needs due to social determinants of health (costs related to housing instability, transportation, home repair needs, and lack of supply of formal care providers due to geographic isolation or other factors). 29 How such significant reliance on family care among racial-ethnic minorities affects disparities in care support is unclear. On one hand, extensive use of family care among racial-ethnic minorities may reduce the prevalence of people lacking care support, potentially mitigating inequities. 
On the other hand, family care may be an imperfect substitute for formal care, especially as severity of needs increases. 53 Alternative payment models that disincentivize the use of institutional post-acute and other formal care are increasingly prevalent, 54,55 potentially increasing the prevalence of community-dwelling people with functional difficulties and raising pressures on family care. 56 For racial-ethnic minorities who are already particularly more likely to have lower household and community resources than White people, these systemic changes exacerbate concerns about the adequacy of family care in meeting growing care support needs. 57 Finally, heavy reliance on family care by racial-ethnic minorities may introduce inequities for caregivers in the future, 58 since caregiving often comes with mental, physical, and financial hardships. [59][60][61] Future work should examine how formal and family care can be leveraged to decrease population-level differences in functional difficulty, paying particular attention to how needs are being addressed across disparate groups.
---
Limitations
Our study has several limitations. First, we used repeated cross-sections in trend analyses without adjusting for multiple observations per person. While consistent with approaches in prior literature, 7 it may not represent the true degree of statistical uncertainty. Second, while the HRS collects race and ethnicity data, only Black and Hispanic individuals are oversampled, leading to small samples for other racial groups. Therefore, by focusing on groups with larger samples, our study does not capture experiences of other populations, limiting the policy implications of our findings.
Third, the 2018 HRS had skip-pattern issues that prevented some respondents from being asked ADL questions. 37 Although the HRS imputed the missing data, there may still be missing data bias. 62 Fourth, our measure of unaddressed functional limitations does not capture unmet need for assistance with ADL/IADLs. The HRS does not ask respondents whether they were unable to complete a task due to lack of assistance or whether, despite receiving help, they remain unable to complete the task. For this reason, our estimates may overstate unmet need (defined as inability to complete tasks) and instead offer broader evidence of incompletely addressed needs. However, any resultant bias is likely consistent over time and should not vary by race and ethnicity.
---
Conclusion
Even with large-scale policy changes brought forth by the ACA, this descriptive analysis provided evidence that Black and Hispanic older adults living in the community were still more likely than their White counterparts to experience functional difficulty and lack care support. Lower-income individuals, in particular, showed evidence of substantial needs unaddressed by caregivers. Despite a greater prevalence of people with difficulties as well as more difficulties, the quantity of formal care used by Black and Hispanic older adults did not meaningfully increase relative to White people in a period that included multiple health care reforms that could impact long-term care use. Our descriptive analyses should encourage policymakers and health groups to systematically identify, understand, and address policy-modifiable disparities.
---
Supplementary material
Supplementary material is available at Health Affairs Scholar online.
---
Conflicts of interest
The authors have no conflicts of interest to declare. Please see ICMJE form(s) for author conflicts of interest. These have been provided as supplementary materials.
This paper addresses, from a socio-legal perspective, the question of the significance of law for the treatment, care and the end-of-life decision making for patients with chronic disorders of consciousness. We use the phrase 'chronic disorders of consciousness' as an umbrella term to refer to severely brain-injured patients in prolonged comas, vegetative or minimally conscious states. Based on an analysis of interviews with family members of patients with chronic disorders of consciousness, we explore the images of law that were drawn upon and invoked by these family members when negotiating the situation of their relatives, including, in some cases, the ending of their lives. By examining 'legal consciousness' in this way (an admittedly confusing term in the context of this study), we offer a distinctly sociological contribution to the question of how law matters in this particular domain of social life.
---
INTRODUCTION
This paper examines the topic of chronic disorders of consciousness from a legal perspective. Our intentions underlying this deceptively simple opening statement, however, require some elaboration: what do we mean by these key phrases 'chronic disorders of consciousness' and 'legal perspective'? The phrase 'chronic disorders of consciousness' is an umbrella term referring to severely brain-injured patients in prolonged comas, vegetative 1 or minimally conscious states. 2 The second term - 'a legal perspective' - requires a little more unpacking. Legal scholarship is now a broad and varied field, and the perspective adopted here may not be one that would already have been imagined. For the paper offers an empirical analysis of the 'legal consciousness' of the family members of patients with chronic disorders of consciousness. 'Legal consciousness' - an admittedly confusing term in the context of this paper about disorders of consciousness - is a term of art within the sociology of law 9 that is much wider in its focus than the medical conditions explored in this paper. 'Legal consciousness' comprises society's constructions of legality - the cultural characterisations of legality that are common currency and drawn upon when, as individuals and groups, we make sense of everyday life. 10 To study legal consciousness is to study the background assumptions about legality that structure and inform routine thoughts and actions. 11 An empirical focus on legal consciousness, then, like much legal research, involves an enquiry into the role of law in society - but not law as expounded by the courts or legal personnel, rather 'law' as constructed by society in various cultural 'narratives' of legality, as they are sometimes described. 12 This paper, accordingly, focuses on the images of law that were drawn upon and invoked by family members when negotiating the situation of their relatives with chronic disorders of consciousness, including, in some cases, the ending of their lives.
In this way, we present a study of law in the everyday lives 13 of ordinary people enduring extraordinary circumstances, thus offering a distinctly sociological contribution to the question of how law matters in this particular domain of social life.
The paper proceeds in four stages. First, to provide some background and context, we offer a brief overview of the legal regulation in England and Wales (where our study mostly took place) of the treatment, care and ending of lives of patients with chronic disorders of consciousness. Secondly, we give an introduction to our data set and describe the research methods used to obtain it. Thirdly, we present our research findings. And fourthly, we then discuss them from the perspective of legal consciousness, before concluding by exploring the wider implications of our analysis for this field of medical care and considering what research agenda they suggest.
---
THE LAW
The treatment of patients with chronic disorders of consciousness is, like all medical treatment, subject to the standards of care developed in the general law of negligence. 14 More specifically, however, treatment decision making is governed by legislation dealing with situations in which individuals lack the capacity to make decisions for themselves. The Mental Capacity Act 2005 is the statute in force in England and Wales that sets out a legal framework for determining mental capacity and for decision making on behalf of those over 16 years old who lack the capacity to make decisions for themselves. 15 Patients with catastrophic brain injuries leading to disorders of consciousness clearly lack such capacity and, under the Act, the senior clinician with treating responsibility (usually the consultant) therefore becomes the decision maker for such patients.
9. See eg M Hertogh 'A "European" conception of legal consciousness: rediscovering Eugen Ehrlich' (2004) 31 J Legal Stud 455; S Silbey 'After legal consciousness' (2005) 1 Ann Rev L & Soc Sci 323; M Kurkchiyan 'Perceptions of law and social order: a cross-national comparison of collective legal consciousness' (2011) 29 Wis Int'l L J 102. 10. SE Merry Getting Justice and Getting Even: Legal Consciousness among Working-Class Americans (Chicago: The University of Chicago Press, 1990); A Sarat '". . . the law is all over": power, resistance, and the legal consciousness of the welfare poor' (1990)
The only exceptions to this would be the rare circumstances 16 in which a patient has elected in advance to refuse consent to certain treatments by way of a legally valid and applicable Advance Decision; 17 or the patient has granted a Health and Welfare Lasting Power of Attorney to someone so that they can give or withhold consent to treatments; 18 or the court has appointed a Welfare Deputy with the power to give or withhold consent 19 (though a Welfare Deputy does not have the power to refuse life-sustaining treatment). 20 Contrary to popular belief, 21 then, the term 'next of kin' has no legal status in England and Wales and does not grant any decision making power over an incapacitated patient. Family members, although not the responsible decision makers, must, however, be given the opportunity for involvement in decision making regarding their loved one's care and treatment. Clinicians have a duty to consult with the patient's family in order to inform decisions in the 'best interests' of the patient. 22 Although most medical treatment decisions can be taken simply as a result of discussions between the clinicians and family and friends (and/or official advocates), the decision making process is more involved in relation to 'serious medical treatment', 23 including the withholding or withdrawal of artificial nutrition and hydration (ANH) from a patient in a permanent vegetative (or minimally conscious) state. All such decisions must be brought to the Court of Protection for an exercise of its declaratory power under the Act. 24 Under s 15, the Court of Protection may make declarations as to the 'lawfulness or otherwise of any act done, or yet to be done' in relation to a person who lacks capacity. The issue for the court in such cases is, strictly speaking, whether it is in the patient's best interests to give treatment rather than to withhold or withdraw treatment, given that the jurisdiction of the court is to grant treatment consent where the patient is incapable of doing so him/herself: If the treatment is not in his best interests, the court will not be able to give its consent on his behalf and it will follow that it will be lawful to withhold or withdraw it. 25
The crucial test of 'best interests' (which marks a contrast to some other jurisdictions where the 'substituted judgment' of the patient is key) 26 is not defined in the Act. Rather, the Act gives a checklist of factors that must be considered when working out what is in a person's best interests. 27 In addition to clinical considerations, these include taking into account the patient's prior expressed values, wishes and beliefs - for example, what the patient would have wanted for him/herself. This is why consultation with family (and friends) is essential to inform any 'best interests' decision. The role of families, as defined by the Act, is to provide information about the person before the loss of capacity - his or her character, beliefs, values and what his or her wishes might be about treatment and care decisions. This information contributes to the court's 28 'best interest' decision, but does not determine it. In a fairly recent (controversial) 29 judgment, 30 even a united family view that the relative would not want to be kept alive in a minimally conscious state was insufficient to tip the balance in favour of withdrawal of treatment when set against other factors, including the value of preserving life, which weighed particularly heavily.
14. See generally MA Jones Medical Negligence (London: Sweet & Maxwell, 2008). 15. Lack of mental capacity is defined in ss 2 and 3. 16. Currently, only 4% of the population of England and Wales reports having made an Advance Decision and only 4% reports having appointed anyone as their Health and Welfare Lasting Power of Attorney (YouGov 2013; http://www.compassionindying.org.uk/knowledge-end-life-rights-and-choices-yougov-poll2013, accessed 15 January 2014). In G v E [2010] EWCA 2512, J Baker elaborated the principle underpinning the statutory provisions regarding deputies: 'the words of s16(4) are clear. They do not permit the court to appoint deputies simply because "it feels confident it can" but only when satisfied that the circumstances and the decisions which will fall to be taken will be more appropriately taken by a deputy or deputies rather than by a court, bearing in mind the principle that decisions by the courts are to be preferred to decisions by deputies' (para 61). 17. Mental Capacity Act 2005, s 24. 18. Ibid, ss 9-11. Note, however, that, under s 11(8) an attorney can only refuse life-sustaining treatment if the grant of the power of attorney expressly provides for this. 19. Ibid, s 16. 20. Ibid, s 20. 21. Albeit wrongly, 48% of people believe that they have the legal right to make medical decisions on behalf of an adult family member who lacks capacity to make decisions for themselves; 22% did not know whether they had this legal right or not; only 22% answered correctly that they did not have this right (YouGov 2013; http://www.compassionindying.org.uk/knowledge-end-life-rights-and-choices-yougov-poll2013, accessed 15 January 2014).
---
THE RESEARCH PROJECT
The research reported here is part of a larger ongoing project conducted by the York-Cardiff Chronic Disorders of Consciousness Research Centre. We draw here on a data set of more than 50 family members who have experience of a catastrophically brain-injured relative in a chronic disorder of consciousness. Ethical approval for the study as a whole was obtained from the Universities of York and Cardiff ethics committees. In-depth semi-structured, audio-recorded, interviews were carried out (between 2010 and 2013) by Celia Kitzinger and Jenny Kitzinger, and then transcribed and anonymised before being shared with other members of the research team. 31 Participants were recruited through advertising via brain-injury support groups and websites, and through social networks (the two interviewers have a severely brain-injured sister), through contacts made after giving formal presentations about our research and via care homes and snowball sampling. The study subsequently received NHS approval (from Berkshire Research Ethics Committee, REC reference number: 12/SC/0495) and we were also able to recruit via consultants, although all interviews took place off NHS premises (generally in people's homes). Interviews were mostly one-to-one, but occasionally in pairs (e.g. a husband and wife asked to be interviewed together, as did a mother and daughter). Interviewees were mostly parents, siblings, spouses/partners and adult children of the patient. Most patients were currently either in a permanent vegetative state (PVS) or a minimally conscious state (MCS); some had died by the time of interview; others had emerged from chronic disorders of consciousness with severe neurological deficits.
The recruitment methods used clearly do not result in a sample representative of all families with severely brain-injured members. Although the pool of interviewees 32 shows considerable variation in terms of age, gender, ethnicity, and cultural and economic capital, 33 we can make no claim as to the representativeness of our sample in relation to the sampling frame (i.e. all those with relatives with a chronic disorder of consciousness). Equally, as a purely qualitative study, we can make no claim as to the distribution of various legal consciousness narratives amongst those with relatives with chronic disorders of consciousness. Nor do we claim that legality was the dominant theme of our interviews. Interviews were often quite long and involved discussion of many issues. Indeed, in some interviews issues of legality were not canvassed positively at all. But although the study of legal unconsciousness, as it were, could be as important and revealing as that of consciousness, 34 our focus here is on the interviewees who positively communicated perceptions of legality. This permits us to discover the significance of legal consciousness for the thoughts and actions of these interviewees in relation to their relatives. This kind of grounded analysis, in turn, allows us inductively to hypothesise about the potential significance of legal consciousness more widely in this domain - findings that can be tested and refined in further work. In other words, our data set allows us to build theory about the potential significance of law for chronic disorders of consciousness. Given how little research exists with families of these patients, and that this analysis is the first to explore legal consciousness in this field, we believe that the theory-building in this paper is a very important step.
31. Further anonymising (including reassigning pseudonyms - and occasionally altering identifying details, e.g. gender of speakers/patient or the cause of the injury) became necessary at the point at which presentations and publications were prepared. The challenges of avoiding 'jigsaw identification' of participants across our publications and of maintaining the confidentiality of those whose stories may also be in the public domain following court hearings and media interest is discussed in B Saunders, C Kitzinger and J Kitzinger 'Anonymising interviews for data sharing: the practical research ethics of protecting participant identities' European Sociological Association Conference, Turin, Italy, 2013. 32. For more information about interviewees (and patients) represented by the sample, see ibid.
The conduct and analysis of the interviews that gives rise to our findings followed the broad methodological trend within legal consciousness work. 35 Where possible, the direct questioning of interviewees about law was avoided. Instead, the focus was upon the characterisations of legality that emerge naturalistically in the ways in which participants discuss their lives and actions generally, or certain topics specifically. Thus, our interviewees, in discussing the situation of their relatives with chronic disorders of consciousness and the approaches taken to their care, treatment and death, revealed their assumptions about legality - assumptions that informed their views and actions. It is these assumptions - aspects of broader cultural narratives of legality - that are the focus of our analysis. It is to this analysis that we now turn.
---
RESEARCH FINDINGS
The clearest and perhaps most obvious finding from our data -one, no doubt, that can be confidently projected inductively on to all those who find themselves in these circumstances -is that the experience of a relative suffering a severe brain injury is a shocking one that propels family members into a state of great uncertainty. Neurology is a complex field of medicine, beyond the ken of most laypeople. As in many fields of medicine (perhaps more so than many), family members found themselves initially entirely dependent on the expertise of medical staff. For example, one interviewee, Kim (all names of people and places are pseudonyms), noted that: . . . when we started this, I was such an innocent and if somebody had said to me 'Right, do we operate or don't we? Do we put him into intensive care or don't we?' . . . I wouldn't have actually known. I was very much in the hands of the professionals . . . Another interviewee, Gill, expressed a similar sentiment: you rely on these people who are at the top of their fields to make these decisions and so you trust them.
Gill's statement about the inevitability of initial trust reflects a common assertion within the broader sociological literature on trust. The medical system with which family members find themselves having to engage is, in Giddens' terms, an 'expert system' 36 -perhaps, indeed, the expert system, par excellence. It is opaque and confusing for most laypeople. Trust in the medical knowledge and expertise of trained staff is the antidote to the initial sense of uncertainty felt by families. 37 As Sztompka has noted in relation to the role of trust in contemporary society: More often than ever before we have to act in the dark, as if facing a huge black box, on the proper functioning of which our needs and interests increasingly depend. Trust becomes an indispensable strategy to deal with the opaqueness of our social environment. Without trust we would be paralysed and unable to act. 38 However, this initial trust can be short-lived. Indeed, such was the case for a number of our interviewees. There are, we suggest, several features associated with chronic disorders of consciousness that reduce the likelihood of initial trust in medical expertise enduring undiminished. For a start, there are the limits of medical expertise in relation to severe brain injury, and associated levels of uncertainty about outcome in the early time period. From the outset, clinicians may thus explicitly inform families of the limits of medical knowledge about, and ability to intervene in, severe brain injury, making comments such as 'time will tell' and 'we have to wait and see'. Secondly, medicine may be implicated as a cause of the disorder of consciousness (e.g. resulting from surgery going wrong or 'half-successful' efforts at resuscitation). 
Over time, there may also be concerns from family members (and tensions between families and funders/care providers) about what care and rehabilitation can be provided, or what options are available to relieve suffering and distress, or indeed, to allow death. 39 Another distinct feature of disorders of consciousness that may have a profound implication for trust in medical expertise is the fact that patients have little or no ability to communicate about anything they may be experiencing. This means that family members become involved in interpreting non-verbal signs as part of the wider process of deciding how best to care for the patient and what is in the patient's ultimate interests. And, of course, family members draw from their long-term and intimate knowledge of their relative, including their sense of what their relative would have wanted -knowledge that medical staff do not share. In a significant reversal of the trust-expertise dynamic, many of the family members we interviewed felt that they were the experts about the patients' experiences of treatment and care and that the medical staff should trust them, not the other way round. 40 Gill, for example, who, above, noted her initial trust in the medical staff caring for her partner, Oscar, very poignantly described her ultimate lack of trust in the staff's understanding of him:
It was really bugging me that they were just sedating him and not actually going to the root of the problem. And because they were saying, 'we are neurology nurses, we know seizures when we see them'. And I said . . . 'you may know neurology patients and you may know seizures, but I know Oscar. And he's not having seizures.' Likewise, Sarah raised this issue in relation to her family member's treatment -a young man supposed to be 'vegetative' (and thus without any awareness of himself or his environment):
They would do all of these things but not look at Ricky and see that he was in pain because, 'Oh, he can't feel pain.' And when they tell you that and you know that you've seen it, you think, 'My God, how can you?' Again, you're back to what your nightmares are made of. How can you trust? You've got to entrust them with them and you've got to walk away and leave them.
38. P Sztompka Trust: A Sociological Theory (Cambridge, UK: Cambridge University Press, 1999) p 13. 39. The fact, for example, that death for a permanently vegetative patient may only be possible/allowed via the withdrawal of artificial nutrition and hydration is one potential source of tension between family members and care providers. 40. In this way, our interviewees' individual assertions of expertise match similar communal assertions arising from organised political/social movements. See eg S Epstein Impure Science: AIDS, Activism and the Politics of Knowledge (Berkeley, CA: University of California Press, 1996).
Sarah's statements point to the intimate relationship between trust and risk, another key theme in the sociology of trust. Trust is usually depicted as a strategy for managing risk. 41 However, the flip-side of this dynamic, as we can see here, is that where trust diminishes one is left with the sense of risk. Where family members no longer trusted medical staff to fully understand their relative, our interviewees perceived their ongoing care as a source of risk -risk that their best interests would not be promoted and that avoidable suffering would be endured. Perhaps inevitably, this often became a source of tension between family members and clinicians. It is in the negotiations of these tensions that we gain our first insight into the significance of legal consciousness. The first of three key images of legality that form part of different legal consciousness narratives becomes apparent in this context.
---
LAW AS SWORD
A key theme to emerge from our data set is that family members felt embattled in relation to the care of their relatives. Many felt they had to fight to achieve what, in their view, was best for their loved ones. Hugo, for example, described a process of struggle in relation to his wife: It's been a long, long battle trying to find the right thing . . . I won't say all the things get resolved but . . . generally . . . it's made a bit better . . . It is a long . . . daily process of just making sure that everything is okay.
Likewise, Elspeth described the difficulties her family experienced in trying to secure her brother's transfer to another hospital where they felt his needs would be better catered for: That's how we got into St Peter's Hospital basically. It's from resources and arguing and bullying, basically, just by not letting up. And then also by having friends who are doctors who knew someone who knew someone.
It is important to note that most interviewees experienced the medical system as a powerful one, and many perceived themselves as having a comparative lack of power in relation to it. Some interviewees, indeed, felt belittled by their experiences. Tracy, for example, expressed this sentiment forcefully: I sit here, I'm like a witch sitting here and now, what can I do next? How do I handle these people? They are dreadful. They are so precious. They are so territorial. They are so -because, you see, they think we're dog shit . . . Equally, Elspeth, in discussing her anger about her family's struggle for the care of her brother, expressed her sense of not having been respected: Int: Do you think any of that anger was legitimate as opposed to just expressing pain? Are there people you ought to have been angry with?
Elspeth: Angry at doctors who didn't listen to us and treated us like idiots . . . Yeah, angry at doctors for not listening.
However, as we saw above in relation to Elspeth, some interviewees, despite their frustrations, felt they had the resources, skills and determination to challenge decisions that were being made by medical staff. But others turned to law in order to redress the power imbalance. Tracy, for example, sought legal assistance when she felt that medical staff were not being sufficiently open with her about what had caused the minimal consciousness of her partner, Trevor: I phoned [the solicitors] and they said to me 'it sounds like either there was a mistake made . . . or maybe they did everything they could and we're just really, really unfortunate. But when you get him out of Estridge Hospital get his records, send them down to me and I'll look at them and I'll tell you what the story is.' Likewise, Gill expressed her faith in law to mitigate a power imbalance. In her case, she was worried that life-sustaining treatment might be denied to her badly injured partner and turned to the law to try to prevent this: . . . you can take [disagreements about care decisions] to your lawyer, you can get your lawyer involved. I tell you what, that put them on edge . . . I said . . . 'I'm just going to go and run this past my lawyer.' (laughs) Do you know, like I was feeling threatened, so I was using her as my power.
For people like Gill and Tracy, legality acts like a sword against the power of the medical system. Locked in a struggle over the welfare of their relatives, law can be turned to in order to alter power relations and influence decision making about treatment and care. Gill's decision to phone a human rights organisation for legal help suggests a perception of law as a weapon of justice and as representing some kind of higher normative order. Likewise, when Tracy described her turn to law as a way of trying to prevent poor treatment happening to other patients in the future, legality is being imagined as a means of combatting unfairness and injustice in the medical system. Yet Tracy's interview displays some ambivalence about the nature of legality as a weapon, suggesting it can be something of a double-edged sword. She additionally framed her turn to law, not as a way of promoting general fairness or respect for human rights but, rather, merely as a way of securing damages under the law of negligence so that she could pay for better care for her partner, Trevor. Equally, she contradicted her earlier concern for the situation of future patients: my brother in law Karl, he's a lovely fellow but he tends to get emotional, and he was saying 'this is a dreadful situation, there's . . . people out there who need this [information]'. And I was going 'yeah, Karl that's for another day, okay? Let's concentrate on Trevor. I don't give a fuck about those other people to be quite honest right now.'
Here, the image of legality, albeit temporarily and perhaps to a limited extent, is disconnected from justice and becomes merely a powerful means to an end. Law is less an expression of a collective justice, and more a tactical weapon to be wielded instrumentally in a personal struggle.
---
LAW AS SHIELD
Both Gill and Tracy were speaking to the interviewers within a year or two of their partner's initial injury, and both believed that the partner would recover from their chronic disorder of consciousness and were fighting to keep them alive and to secure the conditions that would help them improve. Not all of our interviewees, however, shared this belief. A number of them believed that their relatives' disorder of consciousness was irreversible and that it was in their best interests to die (even if they had earlier taken a view more like Gill's or Tracy's). For some of these interviewees, the 'wait and see' period was long over and they no longer hoped for recovery as the means of 'release' but saw death as the only way forward. 42 Here, at least for some, legality could act as a shield. Lillian and Kim were two such interviewees. Both had applied to the Court of Protection in England and Wales for a declaration that it would be lawful to have artificial nutrition and hydration withdrawn from their relative, resulting in their deaths. For both, the fact that this decision was being made by a court was highly significant and helped to protect them from feelings of responsibility. Lillian's relative had suffered severe brain injury after an operation. Before that operation, he had written a letter expressing his desire to refuse life-sustaining treatment if he was rendered incapable of making a decision for himself. However, this letter did not meet all the criteria necessary to be compliant with the requirements of an Advance Decision under the Mental Capacity Act 2005. 43 Nonetheless, the Court of Protection, in part informed by this letter, made a declaration that it was lawful to allow artificial nutrition and hydration to be withdrawn. The court's jurisdiction to decide this was highly significant to Lillian: I think the fact that he'd written what he wrote helped you cope with it in your head. Because otherwise it would feel like it was more your decision . . . If the Court of Protection wasn't there to say, 'Well we are making the ultimate decision and this is what we decide', I would always feel that it was me who'd actually chosen to do it . . . almost feeling that you'd sentenced them to death . . . So the Court of Protection has shielded me from that experience.
---
Kim expressed similar sentiments in relation to the Court of Protection's decision about her son:
Kim: If you make a decision to end a life and then . . . somebody changes their mind . . . if you don't have a court ruling for it, then the ensuing recriminations could destroy a family . . . Whereas if . . . you're thinking 'we decided this was a good idea, it then went through due process of law and it was looked at by someone who had no emotional involvement at all . . .' then it takes the . . . guilt . . . out of it.
Int: Did it feel like it was the court's decision, not yours ultimately? Kim: Exactly. Yes. And that's what I was told. 'You're not deciding to end your son's life. You are posing a question that the judge will then answer for you' . . .

Here, the image of legality, encapsulated in the Court of Protection's jurisdiction, is one of impartiality -'no emotional involvement', as Kim put it. Both Kim and Lillian were relieved to be able to pass this life-and-death decision up to a higher decision-making forum. In relation to both Kim and Lillian, the image of law as a kind of shield was invoked in relation to their own feared sense of individual responsibility. However, Kim's reference to 'due process of law' connects with a wider sense of law as a shield that was also evident in our data. In other interviews, the benefits of due legal process were referred to in relation to society protecting its members from reckless decision making in relation to those with chronic disorders of consciousness. Here, legality is an impartial and appropriate form of authoritative collective regulation that merits compliance and respect. A number of interviewees displayed considerable deference to formal state legality in this regard. Jim is one such example. Jim's interview is of particular interest because his sister, who had been diagnosed as being in a permanent vegetative state, had been killed by his mother after delays in the legal process of being granted permission to withdraw artificial nutrition and hydration. A doctor appointed by the Official Solicitor to give a second opinion on Jim's sister's condition had raised the possibility that she might be in a chronic minimally conscious state, rather than in a permanent vegetative state: When that last . . . report was produced by Doctor Smithers that clearly was going to delay proceedings further . . . 
it was clearly not reaching a conclusive situation, because Doctor Smithers' report was just going to kick it all into touch again. So then . . . my mother had reached some decision in her mind and she borrowed someone's insulin, and there was a [hospital] car that was adapted for taking a wheelchair. You can borrow it and sign it out . . . And you have to have somebody else with you, a third person to look after them in the back of the car . . . But she . . . didn't fill in the form or ask for anyone else to come along -she just went and asked for the keys and they gave them to her (laughs). She was so well known there, and it's a failure of procedure, but anyway. She took the car, took her home, and they were both found dead later on that day. (In accordance with our anonymising strategy outlined in n 31, certain details have been changed.)

Yet, despite Jim's support for his mother's decision, he was nonetheless very deferential to legality:
These are momentous decisions that need to be done by a disinterested, authoritative and experienced party . . . If you have a legal system, you can't have people taking the law into their own hands . . . I still support the fact you've got to have a system, you've got to stick to it.
The deferential approach that Jim displayed towards legality can be significant for other family members contributing to decisions about whether and how to allow their relatives to die. John is one such example. John and other members of his family had asked the NHS Trust to apply to the Court of Protection for a declaration that it would be lawful to withdraw artificial nutrition and hydration from his wife who was in a permanent vegetative state. He was critical of the fact that his wife's consultant had not made the possibility of this legal process known to him. He had learned about it, instead, from a television programme. At the time of interview, the court hearing had been provisionally scheduled for a couple of months' time. Once involved in the application process, however, his sense of it was that it was a procedural formality with a foregone conclusion:
Int: What do you think will be the outcome of the case? Do you have any sense of whether it would be approved or not? John: Well I think it will be, because the Official Solicitor's involved, so I think the legal bods will get together and they'll say it's a no contest this and they'll just nod it through . . . One reason why it might be a little more protracted than normal is they want to hit it on the bounce in that they don't want it to go to court and then come back and more work to be have to be done. In John's view, the decision to allow a dignified end to his wife's life had taken too long. And he acknowledged the role of law in lengthening that process:
Of course when you get the legal people involved, everything then slows down for whatever reason . . . We've got all the reports from the senior clinicians and then the legal people have got to put that into legal speak to present it to the court . . . It might take quite a long time before it gets heard by the judge . . . it'll be a year since we started the process . . . and it'll be . . . four years since she's been in this state. So it's taken five years.
Yet, despite the fact that the Court of Protection proceedings that caused further delay to his wife's death were a legal requirement, 44 and despite this process being a matter of 'nodding it through', he was still deferential to legality and was content to see due process complied with:
Fundamentally, I think it should go to court . . . At the moment, the clinicians know what the law says so they abide by it which means that we have to go through this long protracted process . . . There need to be safeguards and safety checks.

Not all our interviewees, however, displayed this kind of deference to law in light of its role in protecting society from reckless decision making. For others, law was not a shield in this sense but, rather, was an illegitimate barrier to ending what they perceived to be their relatives' indignity or suffering. Like Jim's mother, above, some of our interviewees were willing to subvert the legal process. It is to these data that we now turn.
---
LAW AS BARRIER
Within our data set, in addition to deference towards legality, there was also, conversely, considerable scepticism towards the legal regulation of the ending of patients' lives. Death can only be allowed through the withdrawal/withholding of treatment that may, in some circumstances, include the withdrawal/withholding of artificial nutrition or hydration. However, where a patient has a diagnosis of being in a permanent minimally conscious state (MCS) rather than permanent vegetative state (PVS), the courts have not so far been willing to grant permission for the withdrawal of artificial nutrition and hydration. 45 In this context, some of our interviewees expressed a willingness to kill their relative themselves -either because they thought a court would not allow their relative to die (because of an MCS diagnosis) and/or because they viewed the legally permissible route of ANH withdrawal to be intolerable and lacking in compassion. But for such people, the act of killing would not involve 'taking the law into their own hands' as Jim, above, put it. Rather, it would be a legitimate subversion of legality because of its illegitimacy in this context. This orientation towards legality involves an external and critical stance. Rather than showing deference to law because of its authoritative status in regulating collective matters, some interviewees observed law from the outside, as it were, and critiqued it for its normative failures. This was expressed in terms of law failing to meet acceptable moral standards. Elspeth, for example, discussed the situation of her brother, Ian, who had been diagnosed as MCS. Although Ian eventually died of natural causes, prior to this Elspeth had been planning to kill him: Elspeth: . . . he was in so much pain, breathing really difficult and I said 'Ian, I just wish there was something I could do.' And he again leant out and looked at me. And to me that meant 'there is something you can fucking do' . . . 
And it just -it suddenly became really clear that that's what we had to do is to help him do that. And when we went to see the lawyer and it basically looked impossible [to win a case for ANH withdrawal], having rationalised it and realised that this was the best thing for him, I was personally wanting to take his life illegally . . . Int: But you'd have faced a prison sentence.
Elspeth: Yeah, but that would be not as bad as his sentence . . . If we know that something is the right decision to do then it wouldn't be something I'd question massively afterwards . . . Indeed, for Elspeth, not only was the law morally wrong in relation to the ending of her brother's life; it was an inappropriate intrusion into this domain:
There needs to be an option somehow. I mean, obviously you can't just be, 'Hey he's MCS, let's off him.' But I don't think it should be a legal thing. I think it should be down to the doctors with the carers and it should be a multidisciplinary thing. I don't think it is a legal matter.
A similar scepticism towards legality was expressed by Sarah. Like Jim, her family member had been killed by another relative. Unlike Jim, however, Sarah showed little deference towards the law in this regard. Indeed, she had been willing to do the same: I always said I don't think it'd be wrong, but I am not going to let that system get hold of me. I couldn't deal with prison. I mean I'd end up probably killing someone else as well (laughs) and be truly a murderer . . . In some cases, they say it's wrong what Mary did. Well, morally it isn't wrong . . . And I don't care what the law says. Sarah's reference to being 'truly a murderer' suggests that, from her perspective, formal legality has only a contingent relationship to justice. The legal category of murder is only 'true' when it corresponds to a higher normative order. Where it doesn't, then the law, lacking legitimacy in this respect, can be ignored and subverted.
---
DISCUSSION
What should we make of these different images of legality identified from our interviews? Two potential preliminary objections to our analysis should be anticipated and countered before the discussion ensues. First, it is tempting, perhaps, to view our analysis not as one of law itself but, rather, as one of attitudes to law. One quick response to such a concern would be to note that attitudes to law -the focus of much socio-legal work over the years 46 -are as worthy of study within the legal academy as legal doctrine itself. However, the deeper point of the legal consciousness literature is that if we are concerned with the rule of law in society, we cannot avoid examining how legality is constructed by society. For the societal constructions of legality away from the formal sites of law -law in everyday life, in other words -constitute the rule of law in an important sense. As Ewick and Silbey have noted in an interview about their legal consciousness research:
The law is what people do about the law. We said that people's engagement with the law in their lives was an ongoing construction of relations. Law was just a flavor to any social relation. So in order to understand the rule of law we had to find its place in ordinary social relations. So the question was not only 'what do people do?' It was also 'what is the rule of law?' 47 The study of legal consciousness, therefore, is as much the study of law as is the study of legal doctrine. In this way, our paper makes a novel contribution to our wider understanding of how law matters in the care of those with chronic disorders of consciousness.
Secondly, it might be objected that an analysis of individuals' attitudes and actions does not sustain the claim that we are studying legal consciousness narratives as such -society's constructions of legality. Our response here is that, although individuals have attitudes, attitudes are not individual. Orientations towards legality are social rather than individual. The contribution of the legal consciousness literature has been to highlight the ways in which legality is socially constructed in society. And such constructions are not infinite. The work of both Ewick and Silbey 48 and Halliday and Morgan 49 (discussed below) contends that the constructions of legality within society are limited and capable of structured analysis.
It is important to stress again at this point that our aim in this paper is to build theory about the significance of legal consciousness for the thoughts and actions of families going through this situation. We use our data to make links between the wider body of literature on legal consciousness and the experience of having a relative with a chronic disorder of consciousness. By interrogating our interviews for images of legality, we can open up a dialogue between this data and broader legal consciousness theory. And in turn, legal consciousness theory, applied to and tested in our data, can deepen our insights into the experience of responding to the severe brain injury of a partner, spouse, offspring, parent or sibling. What we will show in the following section is that the images of law as sword, shield and barrier can be interpreted meaningfully through the lens of existing legal consciousness theory. And in light of these connections, we can hypothesise about the wider role of legal consciousness in this domain of social life.
---
LEGAL CONSCIOUSNESS THEORY
Ewick and Silbey proposed an influential typology of legal consciousness 'narratives', as they put it -separate characterisations of law in society which, they suggested, are drawn upon and reproduced in a routine fashion in commonplace lives. 50 They name these three narratives according to the characteristic orientation towards law implicit in the narrative: (1) 'before the law'; (2) 'with the law'; and (3) 'against the law'. Each narrative has a double face, as it were, representing both a characteristic individual response to law and a cultural schema that makes sense of law at a structural level. Standing 'before the law' captures an image of law as ensuring collective fairness, equality and justice. Playing 'with the law', by way of contrast, is a story where law is a morally neutral game that can be played to individuals' advantage if they are clever enough and have the right resources. Being (up) 'against the law' tells yet another story of law where it is the expression of brute power, exercised unpredictably and resisted by individuals where cracks in that power appear (though no attempt is made to alter the power structures themselves).
Halliday and Morgan 51 have recently mapped Ewick and Silbey's typology on to a broader analytical framework derived from Mary Douglas' grid-group cultural theory. 52 Ewick and Silbey's three narratives correspond largely to three of the four 'cultural biases' suggested by Douglas. However, this mapping exercise revealed that a fourth narrative, corresponding to the fourth cultural bias in Douglas' scheme, is missing from Ewick and Silbey's account of legal consciousness. Halliday and Morgan applied the fourth cultural bias to the topic of legal consciousness in a study of radical environmental activism. Within this narrative of legality (which they term 'collective dissent'), state law is similarly regarded as illegitimate and oppressive, but is resisted and subverted in a collective effort to alter the power structures that legality imposes.
Existing scholarship on legal consciousness, then, offers us four core cultural characterisations of legality: 53 (1) before the law; (2) with the law; (3) against the law; (4) collective dissent. Our suggestion is that the images of legality revealed in our data are rooted in these four cultural narratives.
---
Before the law
The image of law as both a sword and a shield connects largely with Ewick and Silbey's 'before the law' narrative of legality. The idea of law being a powerful weapon of justice to counteract and call to account the failings of the medical system is part of the story of law as 'a general, objective and impartial power', 54 as Ewick and Silbey put it. Gill's sense that formal law was protective of human rights, and Tracy's instinct to invoke law to protect the interests of others, correspond to the story of law as a reified system of justice. As Ewick and Silbey noted: Individuals' decisions to mobilize the law thus often involved the crucial interpretive move of framing a situation in terms of some public, or at least general, set of interests. 55 This same interpretive move is seen in the image of law as a shield, protecting society from reckless decision making about the ending of lives. Despite John's criticisms of the slowness of the legal process in relation to his wife's case, and despite Jim's sympathy for his mother's killing of his sister, both portrayed Court of Protection proceedings as an essential system for collective protection. The Court of Protection offered 'safeguards' and a demonstration that the issue would be 'dealt with in a balanced way' (John), and represented 'disinterestedness, authority and experience' (Jim). In their view, the court was the proper place for decisions about the ending of lives. In this narrative of legality, law deserves respect and compliance from deferential subjects.
---
Against the law
Yet, in contrast to the deferential orientation of subjects invoking the 'before the law' narrative, the image of law as an illegitimate barrier connects with Ewick and Silbey's 'against the law' portrayal of legality, where the power of law is resisted. In contrast to Tracy's and Gill's faith in the normative qualities of legality, whereby it can 'protect [relatives'] needs' (Gill) or prevent doctors from 'doing the same again [to others]' (Tracy), the interviews with Elspeth and Sarah reveal considerable scepticism towards it in this domain. For them, given the court's reluctance in the case of W v M and Others 56 to permit withdrawal of ANH in relation to a minimally conscious patient, the requirement of legal proceedings was a problematic obstacle to, and unwelcome intrusion into, the resolution of their families' suffering. As such, the demands of due legal process did not merit compliance or respect but, rather, invited avoidance and resistance -'the legal thing, that's what's made me most angry' (Elspeth); 'I've got absolute contempt for [this legal system]' (Sarah). And despite the power of law to criminalise and punish -indeed, perhaps because of it -we saw such an act of resistance in the action of Jim's mother in her dual 'mercy killing'/suicide. As Ewick and Silbey have noted:
Resistant acts are almost always opportunistic, dependent upon a crack or opening in the face of power . . . [T]hese acts are often practiced to escape, rather than change, a structure of power . . . 57 Jim's mother identified such a crack in the face of power -allowing her to take her daughter out of the hospital unaccompanied and thus to administer a lethal dose of insulin. And in killing herself with her daughter, Jim's mother performed the ultimate act of escaping law's power -albeit at the cost of her life.
---
With the law
Although we suggested above that the image of law as a sword is part of the 'before the law' narrative, it can also connect with Ewick and Silbey's 'with the law' narrative of legality, as we saw in relation to Tracy's interview. Here, formal law is disconnected from justice and is portrayed simply as a resource that may be harnessed tactically for individual gain. Tracy's ambivalence about the character and promise of law hints at this more instrumental and profane story of legality. In the midst of a harrowing and lengthy struggle with a powerful medical system, it is not hard to imagine such a narrative of legality being invoked.
---
Collective dissent
The interview with Sarah reveals that the act of killing a loved one may be more than an act of escape from, or avoidance of, law's power, as was the case with Jim's mother. Sarah contrasted herself with her family member who had killed their loved one:
I know what a crusader Mary can be. She always has been. She might want to make a crusade of this issue with Ricky, but I don't. I mean I'm not saying I don't, but not with my life in prison, thank you . . . I could not have dealt with them getting hold of me. So I would've done it sneakily.
Whereas Sarah was inclined, like Jim's mother, to escape the power of law, others may act in order to change the power of law in this domain for the benefit of all those who may suffer similarly. In this contrast between Sarah and Mary, we can see a glimpse of Halliday and Morgan's fourth narrative of legal consciousness -what they call 'collective dissent'. 58 Here, the authority of state law is rejected and critiqued in the name of some kind of group interest. The act of killing a loved one, then, may be prefigurative or be part and parcel of a wider collective voice of dissent against the power of law in this domain.
---
CONCLUSION
This paper has offered a socio-legal analysis of chronic disorders of consciousness. Like other aspects of society that have been studied through a legal consciousness lens (such as social welfare, 59 workplace relations, 60 sexuality, 61 prostitution, 62 radical activism 63 and, indeed, everyday life generally 64 ), we have demonstrated the pertinence of legal consciousness theory for legal study in this field. If, as legal scholars, we wish to understand the significance of law for the treatment of those with chronic disorders of consciousness, then we must study legal consciousness as much as we study legal doctrine. But what are the wider implications of our analysis for this field of medical care, and what research agenda do they suggest?
It is trite to stress that law surrounds and permeates this field. From above, as it were, the law seeks to regulate who gets to make treatment decisions when patients lack capacity. Further, when it is thought that the withdrawal of treatment may be in the patient's best interests, the law ultimately reserves that judgement for itself (through the medium of the Court of Protection). From below, the law is invoked by individual family members to protect their relatives' interests and to call to account a powerful medical system. One of the main insights of our analysis is that the power of law to achieve these objectives depends on legal consciousness. And it is the legal consciousness of two key groups who stand either side of the injured individual -family members and health professionals -that is central here.
In relation to family members, our data revealed that legal consciousness can undermine the capacity of law to control end-of-life decisions. It could equally undermine law's capacity to support, empower or vindicate family members who feel aggrieved about the medical treatment provided to their relatives. The key questions here are about the conditions under which different narratives become salient for family members at key moments. Individuals will not sustain a single narrative in relation to all aspects of their lives. We should expect people to display a certain amount of incoherence in their legal consciousness. 65 Equally, individuals will not sustain a single narrative over time. We should expect a certain amount of inconsistency in this respect. Although the reasons for individual orientations towards law may be complex and, in some situations, beyond full explanation, the impact of key events or interventions will, in many situations, be capable of analysis. What role, for example, do legal advisors or advice networks have here? Equally, what potential do support groups have to foster a collective sense of agency to try to alter the power structures of law in this domain, and how might such a sense of agency be lost? Some social movements, such as the civil rights movement in the USA, have been successful in challenging the power structures of law. 66 Equally, some movements in the health domain have been successful in challenging the domain of medical expertise. 67 However, at the same time, as Halliday and Morgan have argued, 68 there is a significant empirical dynamic between the sense of collective agency within such activism and the individual sense of fatalism characteristic of the 'against the law' narrative. Empirically, collective initiatives and organisations are vulnerable to failure. 
Individuals who were once energised as part of a group effort to challenge the power of law can be vulnerable to shifts towards a more isolated sense of fatalism where symbolic acts of resistance rather than collective struggles are more common. We should expect such in this context too. But what are the particular features of this context that militate for or against collective agency? Longitudinal research that could reveal such dynamics would be very useful here.
Of course, our analysis of family members' accounts serves to highlight an important gap in our understanding and the need to include medical staff in future legal consciousness research. For when family members invoke the law against medical staff, its power to influence those staff depends on their legal consciousness. In what ways, for example, is law resisted, deferred to or played with in medical decision making around chronic disorders of consciousness? If staff draw upon a 'with the law' consciousness, they will probably respond tactically to the prospect of legal accountability, much like a move in a game. If they draw upon an 'against the law' consciousness, taking a fatalistic stance to law's power, they may resist its power where opportunities arise. Equally, they may engage in collective efforts to subvert or alter the power of law. 69 It is only the 'before the law' consciousness that will prompt a deferential approach to the demands of legality. And just as in relation to families, we must explore the conditions under which particular narratives become salient for medical staff. What role do internal features of the medical organisation, such as legal advisors and complaints processes, play here? In relation to external features, Halliday has argued that the legal compliance of public bodies may be governed through a combination of hierarchical, community and market mechanisms. 70 In what ways, then, do regulators, auditors and courts shape the legal consciousness of medical staff? What role do professional organisations play, for example, in promoting or resisting legal values as professional values? 71 What capacity does the market have in this field to influence the perspectives of individual staff members and teams? All of these questions are important for a full understanding of law in this important domain of life and death -and everything in between. |
Among the negative effects of chronic diseases, the self-perceptions and the self-confidence of chronically ill persons deserve more research. This study explored how such persons dealt with the physical, mental and emotional changes brought about by the onset of chronic disease. The specific focus here was the role of social support networks in older patients' emotional coping. This qualitative study was conducted in two state-owned medical institutions in the north-central part of Nigeria. In-depth interviews were conducted among 19 purposively selected, chronically ill persons aged 50 years and over who were receiving clinical care. This study revealed that except in extremely dire circumstances, older people with chronic conditions preferred to keep knowledge of their conditions strictly within their close family circles. It is almost taboo to inform community members, friends and religious groups about one's chronic health difficulties. Reasons for the need to appear healthy to others might have stemmed from the fear of being discriminated against and attempts to maintain some level of normalcy when interacting with others. Moreover, social networks could also have a negative influence on older persons' emotional wellbeing. For example, many of the respondents received negative comments about their physical appearances. These statements resulted in participants having low self-esteem about their body images and consequently affected their participation in social activities. Thus, the supportiveness of social networks cannot be assumed. Outside of close family, social networks appear to be inadequately equipped to understand some of the sensitivities that chronically ill older persons struggle with.
Relative to the work-life imbalance group, those in the work-life balance and work imbalance groups had significantly higher retirement planning scores (β=5.5 and 1.51, respectively). The life imbalance group had a retirement planning score similar to that of the work-life imbalance group and did not differ significantly from it. In addition, engagement in life and work satisfaction had a significant interaction effect on retirement planning. Discussion: According to the results, employees who have work-life balance tend to have better retirement planning. The government should encourage employees to actively engage in life, such as domesticity, social participation, care, and cultivating interests, among others. Higher engagement in the life domain can significantly promote employees' retirement planning.
---
THE RELATIONSHIPS BETWEEN KOREAN ADULT CHILDREN'S OUTCOMES AND THEIR PARENTS' PSYCHOLOGICAL WELLBEING
M. Lim 1, H. Jun 2, 1. Department of Child and Family Studies, Yonsei University, Seoul, 03722, Korea, 2. Yonsei University, Seoul, Republic of Korea
Guided by a life course perspective, the purpose of this study is to examine the linkages between adult children's outcomes for the transition to adulthood (employment, marital, and coresidence status) and their parents' psychological wellbeing, as well as whether these associations are similar for parental income. Regression models were estimated using data from 2,596 parents whose youngest child was at least 40 years old in the 2012 (4th wave) KLOSA (Korean Longitudinal Study of Ageing). Sons' employment and marital status and daughters' marital status (excluding children's coresidence status) were significantly associated with their parents' levels of wellbeing. Moreover, parents' income moderated the associations between children's outcomes and the level of their parents' life satisfaction. Unemployed sons and single sons and daughters jeopardized the life satisfaction of their mothers with low income, but not other subgroups, and coresidence with sons decreased the life satisfaction of fathers with high income, but not those with low income. Given that the results suggest parental psychological outcomes regarding adult children's circumstances may differ depending on income, this study has implications for intergenerational relationships in the sociocultural context. These findings also imply that parents may have different views about norms regarding the transition to adulthood depending on their economic backgrounds. In sum, based on the life course perspective and a stress process model, this study provides a comprehensive understanding of how adult children and family structural factors may contribute to individuals' wellbeing in old age.
---
THINKING ABOUT THE END OF LIFE WHEN IT IS NEAR: A COMPARISON OF GERMAN AND PORTUGUESE CENTENARIANS
D. Jopp 1 , K. Boerner 2 , K. Kim 2 , A. Butt 2 , O. Ribeiro 3 , L. Araujo 4 , C. Rott 5 , 1. University of Lausanne, 2. University of Massachusetts Boston, 3. University of Aveiro & University of Porto -CINTESIS, 4. CINTESIS; ESEV. IPV, 5. Heidelberg University
Centenarians approach the end of their lives with certainty. Yet, little is known about their thoughts about the end of life (EOL). Comparing centenarians from Germany and Portugal, this study examined how commonly centenarians think about and plan for the EOL, and whether views on the EOL are shaped by cultural contexts and individual characteristics. Centenarians from two larger population-based centenarian studies (87 German and 128 Portuguese) responded to five questions regarding their views on the EOL. Using Latent Class Analyses, we identified patterns of EOL thoughts and examined differences in country and individual characteristics across the derived patterns. A significant portion of centenarians in both countries reported not thinking about the EOL, not believing in the afterlife, and not having made EOL arrangements; perceiving the EOL as threatening and longing for death were less commonly endorsed. LCA identified three latent patterns of EOL thoughts: Class 1 (EOL thoughts, EOL arrangements, and afterlife beliefs); Class 2 (EOL arrangements and afterlife beliefs); and Class 3 (overall low). The proportion of Portuguese centenarians was higher in Class 1, whereas the proportion of German centenarians was higher in Classes 2 and 3. Class membership was also related to centenarians' demographic, social, and health characteristics. In sum, findings indicate that despite closeness to death, centenarians do not necessarily think about and/or prepare for the EOL. Given that lack of EOL planning can result in poorer EOL quality, enhancing communication among centenarians, family, and health care professionals seems imperative.
---
UGANDAN GRANDPARENT-CAREGIVERS: CONSEQUENCES OF CAREGIVING AND QUALITY OF LIFE IN THE HIV/AIDS ERA S. Matovu, M. Wallhagen, University of California, San Francisco
In this manuscript, we seek to highlight the consequences of caregiving and their impact on the health and overall quality of life of Ugandan grandparent-caregivers. Over the past two decades, the number of studies investigating grandparental caregiving provided to children affected by HIV/ AIDS in sub-Saharan Africa has gradually increased. With the sustained loss of lives due to AIDS, older adults are continuing to bear the burden of caring for children affected by the epidemic, often with very limited resources. Despite the acknowledgement of the elderly as the backbone and safety net of the African family in this HIV/AIDS era, very limited research has been conducted to explore the impact of this burden on the caregivers' mental health, physical wellbeing and overall quality of life. Thirty-two participants were recruited from urban and rural areas in Uganda and interviewed using a qualitative approach, specifically grounded theory methodology. The narratives generated from the semistructured and one-on-one interviews were audio-recorded, transcribed and analyzed using both open and axial coding as well as reflexive and analytic memoing congruent with the methodology. Descriptions of physical, financial and emotional caregiver burden were reported. Additionally, our study findings uniquely explored the impact of the perceived burden on their health and overall quality of life; and provided an explanatory model of the caregiving experience. Therefore, the study findings provide a foundation upon which clinicians, researchers and policy makers can design and implement effective interventions needed to improve the health and quality of life of grandparent-caregivers.
---
WHO AMONG JAPANESE EMPLOYEES PREPARES WELL FOR LIFE AFTER RETIREMENT?
K. Katagiri 1 , T. Onze 2 , 1. Kobe University, 2. Research Institute for Culture, Energy and Life, Osaka Gas
The life expectancy of Japanese people is one of the longest in the world. Although most Japanese companies still set the retirement age at 60, through a recent amendment of the law, people can continue to work until the age of 65 under different work conditions than before retirement. As the lifetime employment system was popular until recently and the organizational culture of companies reflects a vertically structured society, the Japanese are not accustomed to making plans and decisions about their careers, and little is known on this topic in Japan. This study examines who plans well for life after retirement. An internet survey was conducted in Tokyo and the Osaka metropolitan area in 2016 by the Research Institute for Culture, Energy and Life, Osaka Gas. The subsample consisted of 924 people aged 40-80 years. The results revealed that only a low percentage (27%) of people in their fifties had planned for life after retirement. A logistic regression analysis was also conducted, in which demographic variables, social activity variables, social relations, and views of life were considered. People who were older, richer, single, house owners, or participating volunteers, or who had hobbies and valued their own way of life, were more likely to have a definite plan for after retirement. We observed no sex difference. Workaholics were at a higher risk of ill-preparation. The study therefore implies that an active private life outside work is necessary to sustain a long life after retirement.
---
WIDOWHOOD AND MORTALITY RISK OF OLDER PEOPLE IN RURAL CHINA: DO GENDER AND LIVING ARRANGEMENT MAKE
Objectives: Increased mortality after spousal bereavement has been observed in many populations. Few studies have investigated the widowhood effect in a traditional culture where the economy is underdeveloped. In this study, we assessed whether widowhood-associated excess mortality exists and whether it differs by gender and living arrangement in rural China.
Methods: The data used in this longitudinal study come from the survey "Well-being of Elderly Survey in Anhui Province (WESAP)", which was conducted every three years between 2001 and 2015 in rural townships of Anhui province. Excluding cases with missing values and restricting the sample to respondents who were married or widowed with adult children at baseline and in follow-up, analyses were carried out on 2,471 adults aged 60 and above. Cox regression was applied to examine the effects. Results: Spousal loss decreased mortality for older rural Chinese, and there was a gender difference in this effect. Analyses also show that living with adult children after spousal loss played a protective role in reducing the risk of older men's death, though it tended to increase older men's mortality risk in general. Conclusion: Our findings suggest that the widowhood effect is culture-specific and spousal loss reduces rather than increases the mortality risk of rural elders in China, which implies that
Background: Spanish-speaking Latina breast cancer survivors experience disparities in knowledge of breast cancer survivorship care, psychosocial health, lifestyle risk factors, and symptoms compared with their white counterparts. Survivorship care planning programs (SCPPs) could help these women receive optimal follow-up care and manage their condition. Objective: This study aimed to evaluate the feasibility, acceptability, and preliminary efficacy of a culturally and linguistically suitable SCPP called the Nuevo Amanecer (New Dawn) Survivorship Care Planning Program for Spanish-speaking breast cancer patients in public hospital settings who were approaching the end of active treatment. Methods: The 2-month intervention was delivered via a written bilingual survivorship care plan and booklet, a Spanish-language mobile phone app with an integrated activity tracker, and telephone coaching. This single-arm feasibility study used mixed methods to evaluate the intervention. Acceptability and feasibility were examined via tracking of implementation processes, debriefing interviews, and postintervention satisfaction surveys. Preliminary efficacy was assessed via baseline and 2-month interviews using structured surveys and pre- and postintervention average daily steps counts based on activity tracker data. Primary outcomes were self-reported fatigue, health distress, knowledge of cancer survivorship care, and self-efficacy for managing cancer follow-up health care and self-care. Secondary outcomes were emotional well-being, depressive and somatic symptoms, and average daily steps. Results: All women (n=23) were foreign-born with limited English proficiency; 13 (57%) had an elementary school education or less, 16 (70%) were of Mexican origin, and all had public health insurance. Coaching calls lasted on average 15 min each (SD 3.4). A total of 19 of 23 participants (83%) completed all 5 coaching calls.
The majority (n=17; 81%) rated the overall quality of the app as "very good" or "excellent" (all rated it as at least "good"). Women checked their daily steps graph on the app between 4.2 and 5.9 times per week. Compared with baseline, postintervention fatigue (B=-.26; P=.02; Cohen d=0.4) and health distress levels (B=-.36; P=.01; Cohen d=0.3) were significantly lower, and knowledge of recommended follow-up care and resources (B=.41; P=.03; Cohen d=0.5) and emotional well-being (B=1.42; P=.02; Cohen d=0.3) improved significantly; self-efficacy for
---
Introduction
---
Background
Women with breast cancer are living longer, and the number of survivors is increasing as the US population ages. Recognizing the need to address the long-term needs of cancer survivors, in 2006, the Institute of Medicine recommended that all cancer patients receive a survivorship care plan (SCP) with a summary of their treatments, a follow-up care plan, and information on potential late effects, self-care, and resources [1]. In 2016, the American College of Surgeons Commission on Cancer developed an accreditation standard requiring cancer care programs to provide SCPs to all nonmetastatic patients treated with curative intent, with annual evaluation of these plans [2]. However, providing patients with SCPs is ineffective unless cancer patients understand and know how to use this information.
Survivorship care planning programs (SCPPs), to be distinguished from SCPs alone, are patient-centered activation interventions providing information on recommended health care and self-care following cancer treatment [1]. SCPPs typically help patients understand and follow recommended care regimens and encourage healthy lifestyles. SCPPs can meet patients' information needs [3], improve communication with clinicians, and improve well-being [4]. In addition, SCPPs need to address healthy lifestyles, as most cancer survivors tend to be overweight or obese and have sedentary lifestyles [5][6][7], particularly Latinos [8], and strong observational evidence links these risk factors with poorer survival among breast cancer survivors [9]. Physical activity interventions, in particular, improve symptoms and health-related quality of life [10][11][12] and reduce the risk of recurrence and death among breast cancer survivors [13]. However, clinicians rarely provide lifestyle counseling to cancer survivors despite evidence that oncologists' recommendations are effective among cancer survivors [14,15]. Non-white cancer survivors, in particular, face ongoing informational needs to address fear of recurrence and management of symptoms, late effects of treatments, and lifestyle changes [16]. Latina breast cancer survivors experience disparities in knowledge of breast cancer survivorship, psychosocial health, lifestyle risk factors, and symptoms after treatment compared with their white counterparts [17][18][19][20][21]. Spanish-speaking Latina breast cancer survivors, especially, report many unmet medical, psychosocial, and informational needs that negatively affect their self-efficacy for managing survivorship [22][23][24]. SCPPs could help these women receive optimal care and manage their condition.
Preliminary evidence suggests high acceptability of mobile health (mHealth) apps among Latino cancer patients because of a high need for Spanish-language information and support on disease and treatment effects [25].
---
Objectives
The objectives of this mixed-methods study were to develop a culturally and linguistically suitable SCPP, called the Nuevo Amanecer (New Dawn) Survivorship Care Planning Program, for Spanish-speaking breast cancer patients in public hospital settings as they approach the end of active treatment, and to evaluate its feasibility, acceptability, and preliminary efficacy. The intervention was delivered via a written SCP and booklet, mobile phone app, and telephone coaching calls and aimed to decrease fatigue and health distress and increase knowledge, self-efficacy for managing cancer survivorship, and physical activity levels.
---
Methods
We describe the intervention components and then methods for examining feasibility, acceptability, and preliminary efficacy.
---
Intervention
The 2-month intervention comprised 4 components: (1) hard copy of an individualized bilingual SCP, (2) Spanish-language survivorship information booklet, (3) Spanish-language mobile app called trackC with integrated activity tracker (Fitbit Zip), and (4) 5 weekly health coaching telephone calls in Spanish to reinforce survivorship care concepts and positive health behaviors. Combined, these components were designed to provide a support system for women's cancer survivorship needs. On the basis of Social Cognitive Theory, the individually tailored intervention was designed to improve outcomes by building self-efficacy for managing cancer (managing stress and fatigue by walking, recognizing symptoms, securing follow-up services, and communicating with physicians), using self-regulation tools of self-monitoring, goal setting, and feedback [26].
---
Written Spanish Language Survivorship Care Plan
We adapted the American Society of Clinical Oncology (ASCO) SCP template [27] for low-literacy, Spanish-speaking Latinas, simplifying the layout and translating it into Spanish. Adaptations were based on iterative review by a Latina psycho-oncologist, 2 oncologists, a bilingual oncology nurse, and 3 Spanish-speaking breast cancer survivors. Participants signed a medical release form, and study personnel extracted the information from medical records to complete the SCP. Completed SCPs were reviewed by the project director and the patient's oncologist or oncology nurse and scanned into the patient's electronic health record. This written bilingual SCP was given to participants at the second home visit.
---
Spanish-Language Survivorship Information Booklet
We selected the "ASCO Answers: Cancer Survivorship" guide because it was comprehensive, easy to understand, and available in English and Spanish [28]. The guide covers what to expect after active treatment, including psychological, physical, sexual, reproductive, financial, and work-related challenges.
---
TrackC Mobile App With Integrated Activity Tracker
The Spanish-language mobile app (trackC) was designed to contain women's breast cancer diagnostic and treatment history and provide information on potential side effects, healthy lifestyles, and survivorship resources. An activity tracker was integrated with the app to display progress toward a personalized daily steps goal. We selected the Fitbit Zip wireless activity tracker, henceforth referred to as activity tracker, based on cost, simplicity, and availability of an application programming interface (API, for integrating the Fitbit with other software applications). The mobile app home page contained 4 section tabs (Figure 1): Daily walks (caminatas diarias), treatment (tratamiento), follow-up care (cuidado de seguimiento), and managing symptoms (manejo de los síntomas). Content was based on ASCO treatment guidelines at the time. We summarize each section briefly:
• Daily walks: information on walking and an integrated activity tracker that could be synced with the app so that it displayed a history of daily steps and the average daily steps target (Figure 2).
• Treatment: screens for entering cancer diagnosis and treatment information that could be updated as needed and emailed to others, including clinicians.
• Follow-up care: general follow-up recommendations for women with noninvasive breast cancer; specific follow-up recommendations for those receiving radiation, tamoxifen, aromatase inhibitors, and women experiencing premature menopause; option to record pending medical appointments and receive reminder notifications.
• Managing symptoms: information on signs of recurrence, treatment side effects, daily exercise, nutrition, and cancer survivorship resources.
---
Developing and Testing TrackC
In phases, we developed mock-ups, a detailed wire frame, and a prototype of the app employing user-centered testing [29] with iterative review and pretesting by 3 Spanish-speaking Latina breast cancer survivors; a Latina psycho-oncologist (breast cancer survivor); an oncologist serving ethnically diverse, low-income cancer patients; and 6 bilingual-bicultural study staff members. The prototype was developed in English and then translated into Spanish using rigorous forward translation and team reconciliation methods.
---
Health Coaching Protocol
The coaching protocol was based on evidence-based motivational interviewing and health coaching techniques, which seek to actively engage patients in managing their health within their social contexts [30]. The health coach encouraged use of trackC, walking, reporting symptoms to clinicians, and calling clinicians to ask questions. Communication with clinicians was emphasized because of evidence that Latino patients often lack the confidence to report symptoms or ask questions, especially when the physician speaks a different language [31,32]. The health coach reinforced cancer survivorship information. The health coach was a bilingual-bicultural Latin American-trained internist with extensive health coaching experience. Coaching consisted of 5 weekly phone calls with the following structure:
(1) review of progress toward daily steps goal and working through any barriers, (2) daily steps goal setting for the coming week, and (3) information on a weekly health topic. The 5 health topics paralleled the trackC content and included: (1) walking and nutrition, (2) breast cancer follow-up care, (3) signs of recurrence, (4) treatment side effects, and (5) resources and review of content from the first 4 calls. The health coach used a manual, but tailored the content based on participants' needs.
---
Study Design and Procedures
This single-arm feasibility study was conducted between February and June 2017, with women recruited from 2 public hospitals in Northern California. All study materials, including the app, were translated into Spanish using team translation and expert review and reconciliation by 6 bilingual-bicultural research staff. The study provided all participants with an iPhone and covered the costs of the data plan. Participants were compensated a total of $60 for completing 2 assessments (baseline and post intervention). During the 2-month study, the same trained bilingual-bicultural research associate (RA) conducted 3 scheduled home visits: (1) enrollment visit (baseline assessment), (2) 1-week visit at the end of the activity tracker run-in period, and (3) final end-of-study visit (postintervention assessment). This protocol was approved by the University of California San Francisco and Contra Costa Regional Medical Center and Health Centers institutional review boards.
---
Eligibility and Recruitment
Eligibility criteria consisted of: (1) self-reported Spanish-speaking Latina, (2) diagnosed with nonmetastatic breast cancer, and (3) within 1 year of termination of active treatment (except for hormonal therapy). Exclusion criteria included already walking more than 30 min on 5 or more days per week. Using lists of potentially eligible participants provided by the hospital sites, we mailed them bilingual initial contact letters and postage-paid refusal postcards. Two weeks later, women who had not returned a refusal postcard were contacted in person or on the telephone by trained bilingual-bicultural RAs to conduct eligibility screening, ask about mobile phone usage, and schedule an appointment to visit the participant's home within 1 week.
---
Study Enrollment-Home Visit 1
The RA conducted the enrollment visit (45-60 min) at the clinic site or the participant's home during which the study was explained in detail, written informed consent was obtained, participants signed a medical release form, and the baseline survey was completed. This marked the start of the 1-week run-in period. Women were provided with a masked activity tracker (hidden daily steps display) and instructions to wear it every day for a minimum of 10 hours per day and not to change their usual activity levels. This run-in period was used to establish participants' baseline average daily steps and personalized goal (average daily steps during run-in period + 2000 steps).
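The personalized target described above (run-in average plus 2000 steps) is simple arithmetic; a minimal sketch follows, with function and variable names that are illustrative rather than taken from the study's software:

```python
def personalized_goal(run_in_daily_steps, increment=2000):
    """Average daily steps over the masked run-in week, plus a fixed increment.

    run_in_daily_steps: one step count per day of the run-in period.
    Returns (baseline average, personalized daily goal).
    """
    baseline = sum(run_in_daily_steps) / len(run_in_daily_steps)
    return baseline, baseline + increment

# Example: a 7-day run-in week of hypothetical step counts
baseline, goal = personalized_goal([3200, 4100, 2900, 3600, 3800, 3000, 3400])
```

The goal is fixed once at the end of the run-in week and then displayed on the app's daily steps graph for the rest of the 2-month period.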
---
End of 1 Week Run-In Period-Home Visit 2
In this 1-hour visit, participants received materials and verbal instructions on the use of the written SCP; survivorship booklet; iPhone and charger with trackC app installed; unmasked activity tracker (with visible daily steps and goal graph); and a step-by-step illustrated guide on how to use the iPhone, app, and activity tracker devices. The RA reviewed the SCP, survivorship booklet, device guide, and the individualized average daily steps goal to be achieved within 2 months. Women were instructed on synchronizing the tracker and mobile app at the end of every day to update the app's average daily steps graph. The RA helped participants enter diagnostic and treatment information from the written SCP into trackC.
---
End of Study-Home Visit 3
At this visit, the RA conducted the final assessment and a brief satisfaction survey, synchronized the activity tracker with the Fitbit app to update the final daily steps data, and collected the mobile phone and charger. Participants were allowed to keep the tracker and encouraged to continue to maintain a daily exercise routine. Upon returning to the office, the RA logged in using the participant's study Fitbit account credentials and downloaded the Fitbit steps data to the study computer.
---
Acceptability and Feasibility Measures
Acceptability and feasibility were examined via tracking of implementation processes evaluation indicators, debriefing interviews, and postintervention satisfaction surveys.
---
Implementation Processes
An electronic database (REDCap) was developed to track usability issues [33]. This system contained data from multiple sources, including phone calls from participants, issues reported by the health coach, daily review of the mobile app back-end database, RA and project director tracking forms and notes, and timing of software updates for the activity tracker. Mobile app data were sent to the study's secure database via encrypted transmission. If the mobile phone or app lost connectivity, data were transmitted the subsequent time the app was connected to the internet.
---
Coaching Call Indicators
The health coach recorded attendance and duration for the 5 calls. At every call, women were asked how many times in the past week they had synced their activity tracker with the trackC app and checked the app's average daily steps graph, and whether they had experienced any problems doing this. On calls 1 and 3, women were asked how difficult they found it to use the graph (response scale: 0=not at all to 5=difficult). After every session, the health coach was asked to rate how much of the material she felt the participant had understood (response scale: 0=none to 5=all).
---
Debriefing Interviews
Semistructured debriefing interviews were conducted with a subset of participants to ask about their study experiences and suggestions for improvement. Selection of women was stratified to include those who had an iPhone versus other type of mobile phone or none, aged <50 versus ≥50 years, and from the 2 study sites. A trained bilingual-bicultural Latina interviewer (not the RA who conducted home visits) used an interview schedule that asked about their experience using the app (eg, what they did and did not like), ease of use, perceived usefulness for managing their cancer, and facilitating factors.
---
Satisfaction Survey
A 5-min satisfaction survey was administered at the final home visit after downloading participants' activity tracker data for the study period and the final assessment. The survey asked them to rate the program's perceived quality, ease of use, and usefulness. Overall quality of the app was assessed using a 5-level response set of "poor," "fair," "good," "very good," or "excellent." Ease of use was assessed by asking about the overall difficulty of using the trackC app, syncing, and using the treatment summary, with response options of "not at all hard," "a little hard," "somewhat hard," "quite hard," or "very hard." Perceived usefulness was assessed by asking participants to rate how useful the app and health coach were for helping them gain a sense of control over their health and how useful the app was for keeping their cancer treatment information in one place and knowing about cancer symptoms and treatment side effects to monitor. Response options for the usefulness ratings were "not at all," "a little useful," "somewhat useful," "quite useful," or "very useful."
---
Efficacy of Intervention Measures
To assess preliminary efficacy, we conducted baseline and 2-month interviews using structured surveys to examine changes in symptoms, knowledge, and well-being. Changes in pre-and postintervention average daily steps count were assessed based on activity tracker data.
---
Primary Outcomes
We measured 6 self-reported primary outcomes: 2 on symptoms, 3 on knowledge of cancer survivorship care, and 1 on self-efficacy for managing their cancer follow-up health care and self-care.
The 2 symptoms assessed were cancer-related fatigue and health distress. We adapted the Patient-Reported Outcomes Measurement Information System (PROMIS) Cancer-Fatigue Scale, which assesses the extent of fatigue and its impact on daily life over the past 7 days [34]. We dropped 1 item ("enough energy to exercise strenuously") and added 2 items from the PROMIS Cancer Fatigue Short Form [35]: "felt tired when hadn't done anything" and "limited social activities because of fatigue." The final 7 items assess 4 aspects of severity (frequency that they felt tired, tired even when hadn't done anything, extreme exhaustion, run out of energy) and 3 aspects of interference with daily life (frequency with which fatigue limited work, thinking clearly, taking a bath or shower). To assess health distress, we selected 4 items from the Medical Outcomes Study Health Distress Scale [36] that asked how much of the time during the past month they felt discouraged, fearful, worried, or frustrated by their health problems. Response options for both the fatigue and distress scales were as follows: "never," "rarely," "sometimes," "often," or "always." Scale scores were the mean of nonmissing items, with higher scores indicating greater fatigue effects (Cronbach alpha=.85) or health distress (Cronbach alpha=.91).
The 3 knowledge measures consisted of 2 global single-item measures and one 6-item scale. The 2 single items on global knowledge of survivorship care asked how true the following statements were for them: "you know what to expect now that your initial treatment has finished" and "you know how to take care of yourself after cancer." The new scale assessed knowledge of follow-up care and ease of finding information. A sample item is "How true is the following statement for you: you know the possible side effects of your cancer treatment?" Response options for the 3 knowledge measures were 0=not at all true to 4=completely true. The scale was scored as the mean of nonmissing items, with higher scores indicating greater knowledge (Cronbach alpha=.82).
A new 8-item self-efficacy for managing cancer care scale assessed confidence in ability to do what is needed to manage health care and health after cancer. A sample item is "How confident are you that you will be able to call your doctor if you have a question about a symptom that might be related to your cancer or treatment?" with response options of 0=not at all confident to 4=completely confident. The scale was scored as the mean of nonmissing items with higher scores indicating greater confidence (Cronbach alpha=.90). These new measures assessing women's sense of control over their survivorship care drew on published questionnaires [37,38].
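The mean-of-nonmissing scoring rule used for these scales can be sketched as follows. This is a minimal illustration, not the study's code: the function name and the None-for-missing convention are mine, and the study does not state how a fully missing response set was handled (returning None here is an assumption).

```python
def scale_score(responses):
    """Mean of nonmissing items; None marks a skipped item.

    Returns None if every item is missing. On the 0-4 response range,
    higher scores indicate greater knowledge or confidence.
    """
    answered = [r for r in responses if r is not None]
    if not answered:
        return None  # assumption: no score when all items are skipped
    return sum(answered) / len(answered)

# Hypothetical 8-item self-efficacy responses with one skipped item
score = scale_score([3, 4, 2, None, 3, 4, 3, 2])
```

Scoring the mean of answered items (rather than the sum) keeps scores comparable across respondents who skip different numbers of items.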
---
Secondary Outcomes
Secondary outcomes included emotional well-being, depressive and somatic symptoms, and average daily steps as recorded by the activity tracker.
Emotional well-being was assessed with the 6-item Emotional Well-Being Scale from the Functional Assessment of Cancer Treatment-General [39]. Scores range from 0 to 24, with higher scores indicating more well-being (Cronbach alpha=.77). We used the Patient Health Questionnaire 8-item version to assess depressive symptoms [40]. Scores range from 0 to 24, with higher scores indicating more depressive symptoms (Cronbach alpha=.64). We used the 6-item Brief Symptom Inventory Somatization Scale, which assesses the extent to which they were bothered by symptoms such as faintness and dizziness, pains in heart or chest, nausea, trouble getting their breath, numbness or tingling, and feeling weak [41]. Scores range from 0 to 4, with higher scores indicating more symptoms (Cronbach alpha=.76).
Baseline steps were calculated as the average daily steps during the 1-week run-in period (total steps divided by number of days) before the intervention start date. Postintervention steps were calculated as the average daily steps during the last week of the 2-month study period. Pre-post changes in average daily steps were calculated as the postintervention average daily steps minus the preintervention average daily steps.
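The steps outcome above reduces to a difference of two weekly averages. A short sketch with invented daily counts (variable names are illustrative):

```python
def average_daily_steps(daily_steps):
    # Total steps divided by number of days, as defined in the text
    return sum(daily_steps) / len(daily_steps)

# Run-in week (pre) vs the last week of the 2-month period (post)
pre = average_daily_steps([3000, 3500, 2800, 3300, 3100, 2900, 3400])
post = average_daily_steps([5200, 4800, 5100, 4700, 5300, 5000, 4900])
change = post - pre  # positive values indicate more walking
```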
---
Analyses
Descriptive statistics were used to analyze sample characteristics and satisfaction survey responses. Debriefing interviews were transcribed verbatim in Spanish. A total of 3 bilingual-bicultural RAs independently performed content analyses of all transcripts, and discrepancies were resolved through team meetings. Linear mixed models were used to assess mean pre-post differences on primary and secondary outcomes, controlling for hospital site and reporting unstandardized betas, P values, and Cohen d as an estimate of effect size.
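The reported effect sizes pair each woman's baseline and 2-month scores. As a hedged illustration only: the paper does not state how Cohen d was standardized, so the sketch below uses one common convention for pre-post designs (mean change divided by the baseline SD), and all numbers are invented.

```python
from statistics import mean, stdev

def prepost_effect(pre, post):
    """Mean pre-post difference and a Cohen-d-style effect size.

    Standardizes the mean change by the baseline SD (an assumed
    convention; the paper's exact standardizer is not stated).
    """
    diffs = [b - a for a, b in zip(pre, post)]
    b = mean(diffs)       # unstandardized mean change
    d = b / stdev(pre)    # effect size in baseline-SD units
    return b, d

# Toy fatigue scores (0-4 scale) for 6 hypothetical participants
b, d = prepost_effect(pre=[2.6, 3.0, 2.2, 2.8, 3.2, 2.4],
                      post=[2.3, 2.7, 2.1, 2.5, 2.9, 2.3])
```

A negative d here corresponds to the paper's direction for fatigue and distress (lower postintervention scores).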
---
Results
---
Demographic Characteristics
Of 100 women in the sampling frame, 23 enrolled in the study, 17 were ineligible, 17 could not be reached, 7 had incorrect contact information, 34 refused, and 2 were deceased. Mean age of participants was 55.8 years (SD 13.1), all were foreign-born and limited English proficient, most had an elementary school education or less (n=13), over half were of Mexican origin (n=16), and all had public health insurance (Table 1). About half (n=11) reported financial hardship in the past year, and most reported a comorbid chronic condition (n=17). The majority had breast conserving surgery (n=14) and both radiation and chemotherapy (n=15). Only 1 woman reported not owning a mobile phone.
---
[Table 1 excerpt, mobile phone use at baseline: used the mobile phone to make calls at least once a week during the last month, 14 (64%); sent a short message service text message at least once a week during the last month, 15 (68%); used the mobile phone to access the internet, n (%) not recoverable.]
---
Acceptability and Feasibility
---
Implementation Processes
Nonscheduled home visits by the RA became necessary for all participants, either because participants requested help with the trackC app, activity tracker, or phone, or because study staff noticed a lack of data transmission from trackC to the app backend database. A total of 63 nonscheduled visits occurred (mean 3 per participant, SD 1.9; range 1-7), during which the RA would troubleshoot technical and user issues and provide additional support and instruction. Most issues were technical (46 instances, because of the app host site expiring, activity tracker software updates, or the app and tracker not syncing) or hardware related (22 instances of the activity tracker needing a new battery or the iPhone locking participants out). Some were user issues (28 instances of forgetting to sync or how to do it, not knowing how to swipe out of an app section, or losing the activity tracker).
---
Coaching Call Indicators
Coaching calls lasted on average 15 min each (SD 3.4). A total of 19 of 23 participants (83%) completed all 5 coaching calls, 1 woman completed 4 calls, 1 woman completed 1 call, and 2 women completed no calls. The number of times per week that women synced their activity tracker and app ranged from 4.4 to 5.7, and the number of times per week that women checked their daily steps graph on the app ranged from 4.2 to 5.9. Ratings of the difficulty of using the daily steps graphs at call 1 and call 3 were almost identical, with most women (12 at call 1, 11 at call 3) rating it as not at all difficult. A total of 3 women reported that vision problems interfered with reading the app screens. On the basis of the coach's ratings, the number of women understanding all of the material ranged from 17 (81%) for call 1 (daily steps and goal-setting) to 20 (100%) for call 3 (signs of recurrence).
---
Debriefing Interviews
A total of 10 semistructured postintervention debriefing interviews were conducted (Table 2). Participants were aged 56 years on average, and most were from Mexico (Mexico=7, Guatemala=2, and Nicaragua=1). All participants reported elementary school completion or less. In general, participants reported positive attitudes toward the program and increased awareness of the importance of walking. Themes emerging from the interviews are described next.
---
Perceived Usefulness of Intervention Components
Participants voiced appreciation for the trackC app information about their disease, treatments, side effects, and signs of recurrence, having felt misinformed about cancer survivorship before the study. All the women wanted the written SCP in addition to the app version. They reported feeling motivated and supported by the weekly check-ins with the health coach because she provided them with tailored, detailed, and credible information and support; helping them understand their disease, symptoms, and bodies; and achieve their walking goal. Participants valued the visual and auditory instant feedback provided by the activity tracker and app, for example, applause received after achieving their daily goal, helping them maintain a positive attitude toward walking.
---
Perceived Ease of Use of Mobile App
Participants described varied experiences about the effort required to navigate and use the app. Users with mobile phone experience found the app easy to use. However, 4 of 10 participants with little or no mobile phone experience expressed that use of the app required more effort and support at the beginning of the study. Some participants reported difficulties because of poor literacy or poor eyesight. All women reported being satisfied with the app's interface, fonts, colors, and visuals.
---
Perceived Benefits of Intervention
Informants reported positive outcomes related to walking. A total of 7 of 10 women reported enhanced physical health because of their participation in the study, including weight loss, improved digestion and bowel movements, and improved sleep. Participants also reported improved emotional well-being, that is, decreased stress and better mood.
---
Social Norms
A total of 3 women felt a sense of accountability because they knew their steps were being monitored by themselves and others. Women reported that social support and encouragement from family members and neighbors pushed them to achieve their daily goal. Finally, several women expressed a shift from being extrinsically motivated by the app and coach to increase their walking to being intrinsically motivated because they wanted to do it for themselves.
---
Table 2. Themes, subthemes, and illustrative quotes from debriefing interviews.

Perceived usefulness of intervention components

App provided credible information about healthy lifestyles, side effects of treatments, and signs of recurrence: "The app where you could find information you could trust. You see so many things on the internet, a home remedy, but nothing where you feel sure that what they are telling you is true." (ID 9015)

Feedback provided by the activity tracker and the app graph of daily steps progress over time was motivating: "What motivated me to walk was wearing the pedometer to see how much I could walk in one day and that this was recorded (on the app) so that I would not forget how much I had walked the day before and the day before that." (ID 8027)

Visual and auditory positive feedback from the app for steps taken (graphs of progress toward goal, cheering sounds) was motivating: "It seemed really important to me that when you met your goal, it was as if it (the applause) were saying, 'Yay, you won!' as if you had won a prize…and I liked it." (ID 9015)

Health coach provided detailed, tailored information on their specific treatments, potential side effects, and follow-up care, as well as motivation and support for walking: "Yes, she (health coach) really helps you. She motivates you to walk, how to take care of yourself, your health, what you should discuss with your doctor in case you feel something. She (health coach) tells you, you need to be aware of your body and report anything unusual, like pain, to the doctor. She gives you great advice." (ID 8010)

Goal setting provided motivation for walking: "Setting goals helped me focus...That helped me a lot. I used to not take my dogs for a walk, I would let them just run around here, but now I take my dogs for a walk so I can get more steps." (ID 9001)

Perceived ease of use of the mobile app

Ease of use varied with prior experience using a mobile phone: "It was a little hard, but then I read the instructions that they had given me. I have a cell phone, but I only use it for emergencies and to communicate with my children. But my cell phone is very basic and the one I use here (for the study) is more advanced. But after a while, I got the hang of it." (ID 9002)

Appearance (fonts, font size, and colors) was satisfactory, but a few suggested larger fonts and navigation buttons: "The button was in the corner and I would push it two or three times to get it to work. You need to have more room to be able to push the button." (ID 8040)

Perceived benefits of the intervention

Walking was a commitment that they made upon joining the study: "The walking is so good. I used to feel stressed, very tired, with no energy, and it all went away. At first, when I started walking, I would get tired, but now, I can't believe it. After walking so much, I don't get tired." (ID 8010)

Encouragement of family and friends: "I would get excited when I would open the app and the stars would come out. And my little boy would say, 'Well, let's go walk so we can see you meet your goal.' And I would say, Yes, let's go! And my kids would say, 'Mami, aren't you going to walk today?' and I would answer, 'Yes, go get me the cell phone' (laughs). They, too, were involved." (ID 9002)
---
Satisfaction Survey
A total of 21 of 23 women completed the final assessment, for a retention rate of 91%.
---
Overall Quality
The majority of the women (17/21; 81%) rated the overall quality of the app as very good or excellent (all rated it as at least "good"). The overall quality of the information received on how to use the trackC app was rated as very good or excellent by 16 women (76%); all rated it as at least good.
---
Ease of Use
Most women (15/21; 71%) rated the ease of syncing the trackC app and activity tracker as being not at all hard (Table 3). Fewer respondents reported it being not at all hard to use the treatment summary found in the trackC app (11/21; 52%).
---
Usefulness
Regarding the usefulness of the SCPP for feeling more in control of their health, all except 1 woman rated the health coaching calls as quite or very useful, and all women rated the trackC app as quite or very useful. Almost all women (n=19) reported that the trackC app was quite or very useful for keeping their cancer treatment information in one place. Having information on trackC about cancer symptoms and about side effects was rated as quite or very useful by 18 and 19 respondents, respectively.
---
Efficacy of Intervention
---
Primary Outcomes
Regarding primary outcomes, compared with baseline, fatigue (B=-.26; P=.02; Cohen d=0.4) and health distress levels (B=-.36; P=.01; Cohen d=0.3) were significantly lower post intervention (Table 4). Women reported significantly greater knowledge of recommended follow-up care and resources after the intervention (B=.41; P=.03; Cohen d=0.5); self-efficacy for managing cancer follow-up care did not change.
---
Secondary Outcomes
Of the secondary outcomes, emotional well-being improved significantly post intervention (B=1.42; P=.02; Cohen d=0.3). Women's average daily steps increased significantly from 6157 to 7469 (B=1311.8; P=.02; Cohen d=0.5).

Table 4 footnotes:
a. Controlling for study site and using intent-to-treat analysis (includes 2 participants who did not complete the postintervention survey).
b. Adapted 7-item Patient-Reported Outcomes Measurement Information System Cancer Fatigue Scale-Short Form; possible range=1-5, high score=more fatigue.
c. 5-item subset of the Medical Outcomes Study Health Distress Scale; response options of 1=none of the time to 5=all of the time; possible range=1-5, high score=more health distress.
d. New single item "How true is the following statement for you: you know what to expect now that your initial treatment has finished?" with response options of 0=not at all true to 4=completely true.
e. New single item "How true is the following statement for you: you know how to take care of yourself after cancer?" with response options of 0=not at all true to 4=completely true.
f. New 6-item knowledge of follow-up care scale with response options of 0=not at all true to 4=completely true; possible range=0-4, high score=greater knowledge.
g. New 8-item self-efficacy for managing cancer care scale with response options of 0=not at all confident to 4=completely confident; possible range=0-4, high score=more confident.
h. Emotional Well-being Scale of the Functional Assessment of Cancer Therapy-General; possible range=0-24, high score=better emotional well-being.
i. Patient Health Questionnaire 8-item Scale; possible range=0-24, high score=more depressive symptoms.
j. Brief Symptom Inventory Somatization Scale; possible range=0-4, high score=more symptoms.
k. Calculated as the average daily steps during the 1-week run-in period before intervention start and the last week of the 2-month study period.
---
Discussion
---
Principal Findings
This study sought to develop and test the preliminary acceptability, feasibility, and efficacy of a multicomponent breast cancer SCPP designed for Spanish-speaking breast cancer survivors. The intervention consisted of a bilingual individualized written SCP, a Spanish language survivorship information booklet, a mobile app called trackC with an integrated activity tracker, and health coaching calls. We found preliminary support for the program, with significant 2-month improvements in fatigue, health distress, and emotional well-being and increased knowledge of recommended follow-up care and average daily steps.
Women reported checking their daily steps graph about 5 times per week and the majority indicated the app was not difficult to use. The majority of women rated the quality of the app as "very good or excellent." Participants were motivated by the visual and auditory instant feedback provided by the activity tracker and app. In qualitative debriefing interviews, most women indicated that the app and coaching were useful for giving them a sense of control over their health, that the app provided a useful place for storing cancer and treatment information in one place, and that the SCPP resulted in increased physical activity, weight loss, and improved digestion and sleep. These results are consistent with similar studies that have demonstrated preliminary satisfaction with or interest in mobile phone app-based survivorship information among Latina [42] or non-white cancer survivors [43].
---
Lessons Learned
Although women were receptive to the SCPP overall, we learned a number of lessons. First, women preferred receiving both the mobile and written versions of their bilingual SCP, so a mobile app alone might not suffice. Further customization of SCPs to include breast cancer type-specific information, for example, hormone receptor status, would be helpful. We were able to provide this level of customization via the health coaching; customizing the app itself exceeded the budget of this pilot study but could be addressed in future studies. We did not anticipate the extent of technical issues involved in maintaining communication between the trackC app, the activity tracker API, and the database management API. Unanticipated updates in the APIs of the activity tracker or database management system necessitated unscheduled home visits to install these updates, as participants often did not know how to do this. Women sometimes forgot to wear the activity tracker or sync their trackC app and tracker. A small number of women with limited mobile phone experience, low literacy, or vision impairments indicated some difficulty in navigating the app; thus, the app would need to be further tailored and tested to meet their needs. For some women with limited iPhone or mobile phone experience, individualized assistance in learning how to use apps was needed; for example, knowing how to swipe to advance to the next screen required repeated reinforcement. Regarding the design of the app, in the future, we would enlarge and centrally position the button used to sync the app and activity tracker, as suggested by some women.
---
Limitations
This study has limitations. As a feasibility study, we did not include a control group and the sample size was small. As the study was conducted in Northern California with mostly Mexican women, results may not generalize to other regions or Latino national origin groups. In addition, because this was a multicomponent intervention, we are not able to isolate the relative effects of each of the components. Finally, we experienced a high refusal rate (60%), much higher than in our prior studies with women from the same population, so the final sample may not be representative of Spanish-speaking Latinas in our region. Notably, this study coincided with a period of increasing immigration raids and heightened fear in local Latino communities. In our study, one of the most common reasons women gave for refusing to participate was fear that they would be tracked by immigration officials via the Fitbit wearable device.
---
Implications
Mobile phones offer promise as an excellent delivery mode among Latinos because of their widespread use of web-enabled phones to access the internet [25,44,45]. Mobile app interventions can be adapted for those with visual or auditory impairments and low literacy. Supplemental training and telephone health coaching can be provided to those with limited experience using mobile phones and to sustain levels of mobile app use. For many vulnerable populations, mHealth approaches alone may not suffice and more personal and intensive delivery modes will be needed. Some segments will prefer not to use mobile apps.
---
Conclusions
Our pilot study results support investment in testing of smartphone and health coaching SCPPs among Spanish-speaking Latina breast cancer survivors. Additional research employing user-centered testing can identify the appropriate combinations of delivery modes and intensity of SCPPs for vulnerable subgroups of cancer survivors. Harnessing technology to address the needs of these groups ensures equitable access to its potential health benefits related to self-care and long-term cancer survivorship outcomes.
---
Background: Perceived stigma has greatly influenced the quality of life of COVID-19 patients who have recovered and been discharged (RD hereafter). It is essential to understand the COVID-19 stigma of RD and its related risk factors. The current study aims to identify the characteristics of perceived COVID-19 stigma in RD using latent profile analysis (LPA), to explore its psycho-social influencing factors, and to determine the cut-off point of the stigma scale using receiver operating characteristic (ROC) analysis.

---

Introduction
COVID-19 has emerged as a global health emergency and posed a great threat to almost all countries and regions (1). It affects all segments of the population, especially the patients of COVID-19 (2). The impact is far beyond merely physical concerns. Previous studies have shown that the pandemic has led to psychological problems among patients, healthcare workers, and other caregivers (3,4). Patients infected with COVID-19 not only suffered from illness, but also had mental health problems due to viral infection and worries about after-effects (5). Perceived stigma is prevalent among COVID-19 survivors and healthcare workers in COVID-19 designated hospitals, which has an interrelated bearing on their mental health (6,7).
In the post-pandemic era, most patients of COVID-19 have been discharged (8). The mental health of those who had recovered from COVID-19 and been discharged from hospital (RD hereafter) deserves more attention during their rehabilitation (9). These patients were isolated during treatment and had limited freedom and communication with the outside world (10). Thus, their negative emotions cannot be alleviated in a short period of time. RD may have a more serious sense of loneliness and repression, as well as a higher level of psychological pressure (11). In the aftermath and the long-COVID period, they may experience depression, anxiety, fatigue, posttraumatic stress disorder, and neuropsychiatric syndromes (12)(13)(14). Poor mental health will impact one's social behaviors and cognitive functions. As a result, RD's mental health deserves close attention.
RD's mental health condition might affect their perceived COVID-19 stigma (15). Perceived stigma is one's personal feelings about the stressors and the projection of those feelings onto others (16). From the patient's perspective, they might feel stigmatized if their mental health condition is poor. COVID-19 RD are at high risk of PTSD, partly because of their near-death experiences, delirium, and ICU-related trauma during the COVID-19 experience (17,18). They might have uncontrollable thoughts about the experience and their image in others' minds, which would increase their perceived stigma. Perceived stigma might also in turn predict PTSD (19). Depression is another prevalent mental health issue among COVID-19 RD (20). RD with depressive symptoms might be more sensitive and pessimistic about negative attitudes from the community, which makes them feel more stigmatized (21). Besides, to contain the spread, patients are required to stay in close isolation during treatment and reduce their movement after discharge, which may lead to feelings of loneliness and fear of discrimination, thus increasing their perceived stigma (22). Peace of mind is important for them to manage stressful situations, as well as to avoid irresistible but unwanted impulses (23). Resilience is not a linear path toward happiness, but a combination of behaviors that encourages individuals and communities to persevere and move forward when confronting difficult situations (24,25). A higher level of resilience might decrease the risk of developing psychological distress, and suppress suicidal thoughts and insomnia (26,27). Resilience might be influenced by job stress, perceived stress, and mindfulness, and be promoted by brief resilience interventions based on positive psychology (28)(29)(30).
Thus, with higher level of peace of mind and resilience, patients will control their emotions better and be less sensitive to the negative attitudes from others, which might result in lower sense of perceived stigma. From the society's perspective, low perceived social support may also lead to perceived stigma among COVID-19 RD (31). Perceived stigma might in turn increase the mental problems among RD and be detrimental to their mental health recovery (32). Therefore, the stigma among COVID-19 RD may have a certain impact on the whole population.
The perceived COVID-19 stigma in RD could be evaluated by a modified 12-item HIV stigma scale, which contains 4 sub-scales measuring personalized stigma, disclosure concerns, concerns about public attitudes, and negative self-image (33). However, this scale has no cut-off point, which makes it hard to precisely evaluate stigma among RD. Clinical psychiatric interviews are usually regarded as the gold standard for diagnosis and the criterion for determining cut-off points of screening tools. However, the identification and diagnosis of cases with perceived COVID-19 stigma has not reached a consensus. Additionally, the characteristics and prevalence of perceived COVID-19 stigma among RD and its psycho-social influencing factors remain elusive. Most previous studies have focused on the effect of perceived stigma on mental health without considering the possible vicious circle between mental health and perceived stigma among RD. According to the socio-ecological model, one is not a passive recipient of life events, but a key actor in constructing and modifying the living system (34). It is therefore important to explore the influencing factors of perceived COVID-19 stigma among RD. The specific objectives of the current study are to identify the characteristics of perceived COVID-19 stigma in RD using latent profile analysis (LPA); to explore the psycho-social influencing factors of perceived COVID-19 stigma in RD; and to determine the cut-off point of the stigma scale using ROC analysis for further evaluation and application, which may help healthcare professionals and policymakers deal with increasing stigma and control the pandemic effectively.
---
Methods
---
Study design and participants
The cross-sectional study was carried out among previously infected COVID-19 patients in Jianghan District (Wuhan, China) from June 10 to July 25, 2021. Extracted from the electronic medical records of the Jianghan District Health Bureau, a total of 3,059 COVID-19 patients met the inclusion criteria and were eligible for the study, as they were infected with the original SARS-CoV-2 strain and were diagnosed between December 10, 2019 and April 20, 2020. When they were receiving clinical re-examination, 1,601 COVID-19 survivors were invited to complete a questionnaire survey on their mental health status, and 1,541 of them who finished the survey were included in the study. All investigators and support staff in this study were trained according to the same protocol and were required to have an educational background in medicine or public health. From June to July 2021, the online structured questionnaire was distributed to those who had a history of COVID-19 infection and had been discharged. All participants' digital informed consent was obtained to ensure their voluntary participation. The online survey platform REDCap was used to disseminate the self-administered electronic questionnaires and digital consent forms to the target population.
---
Stigma
The Short Version of COVID-19 Stigma Scale (CSS-S) is a 12-item scale that is employed for evaluating the perceived stigma of patients of COVID-19 during the past 2 weeks (33). The scale was reviewed by several experts in the field and was approved to use in this population. Each item is scored on a Likert scale of 1-4. Higher total scores indicate greater stigmatization. In this study, the Cronbach's alpha of the instrument was 0.936.
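As a minimal sketch of the scoring just described (12 items, each rated 1-4, so valid totals fall in the 12-48 range; the function name and example responses are hypothetical, not study data):

```python
def css_s_total(item_scores):
    """Total score for a 12-item scale with each item rated 1-4,
    as described for the CSS-S; valid totals fall in 12-48.
    Higher totals indicate greater perceived stigma."""
    if len(item_scores) != 12:
        raise ValueError("expected 12 item responses")
    if any(s not in (1, 2, 3, 4) for s in item_scores):
        raise ValueError("each item must be rated 1-4")
    return sum(item_scores)

# Hypothetical set of 12 responses for one participant
responses = [2, 3, 2, 2, 3, 4, 2, 1, 3, 2, 3, 2]
total = css_s_total(responses)
```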
---
Post-traumatic stress disorder
The Impact of Events Scale-Revised (IES-R) is a 22-item scale aimed at screening posttraumatic stress symptoms in adults or older people. The items of this instrument are rated on a 5-point Likert scale from 0 to 4 (35,36). The IES-R contains three dimensions measuring intrusion, avoidance, and hyperarousal. Respondents rate their degree of distress during the past 7 days after they have identified a specific stressful life event that occurred to them. A total score equal to or above 35 can be regarded as positive PTSD symptoms. This instrument has been proven valid and reliable among Chinese COVID-19 patients (37). In this study, the Cronbach's alpha of the instrument was 0.965.
---
Anxiety
The Generalized Anxiety Disorder Questionnaire (GAD-7) consists of 7 items that are rated on a 4-point Likert scale from 0 to 3. It was developed for measuring the severity of generalized anxiety symptoms during the past 2 weeks (38). The scores of the instrument range from 0 to 21. A cutoff score of ≥ 5 is recommended for considering significant anxiety symptoms. This instrument has demonstrated to be reliable and valid among the Chinese population (39,40). In this study, the Cronbach's alpha of the instrument was 0.951.
---
Depression
The Patient Health Questionnaire (PHQ-9) is a 9-item questionnaire that is used for screening and monitoring depression of varying degrees of severity during the past 2 weeks (41). The items of the PHQ-9 are rated on a 4-point Likert scale ranging from 0 to 3. The total score is utilized to assess the degree of depression of participants, with scores of ≥ 5 indicating depression. This instrument has been validated among various Chinese populations (42,43). In this study, the Cronbach's alpha of the instrument was 0.914.
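Both the GAD-7 and the PHQ-9 described above are sum scores of 0-3 rated items compared against a screening cutoff of ≥ 5. A minimal sketch (hypothetical responses; the function name is illustrative):

```python
def screens_positive(item_scores, cutoff):
    """Sum items rated 0-3 and compare the total with a screening
    cutoff (>= 5 for both GAD-7 and PHQ-9 in this study)."""
    if any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("each item must be rated 0-3")
    total = sum(item_scores)
    return total, total >= cutoff

# Hypothetical responses: 7 GAD-7 items, 9 PHQ-9 items
gad7 = [1, 0, 2, 1, 0, 1, 1]
phq9 = [0, 1, 0, 0, 1, 0, 1, 0, 0]
gad_total, gad_pos = screens_positive(gad7, cutoff=5)
phq_total, phq_pos = screens_positive(phq9, cutoff=5)
```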
---
Sleep disorder
The Pittsburgh Sleep Quality Index (PSQI) consists of 18 items and is used to measure an individual's quality of sleep during the past 2 weeks (44). It contains seven components including subjective sleep quality, sleep latency, sleep duration, sleep efficiency, sleep disturbance, use of sleep medication, and daytime dysfunction, and each component is rated on a 4-point Likert scale from 0 = no difficulty to 3 = severe difficulty. The total scores range from 0 to 21, and a cutoff score of ≥ 6 is recommended for considering certain sleep disorders (45). This instrument has been validated among the Chinese population (46). In this study, the Cronbach's alpha of the instrument was 0.784.
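The PSQI global score just described can be sketched as the sum of the seven component scores checked against the ≥ 6 cutoff (hypothetical component ratings, illustrative function name):

```python
def psqi_global(component_scores, cutoff=6):
    """Sum the seven PSQI component scores (subjective quality, latency,
    duration, efficiency, disturbance, medication use, daytime
    dysfunction; each 0-3, global range 0-21). A total >= cutoff flags
    a probable sleep disorder, using the >= 6 cutoff from this study."""
    if len(component_scores) != 7:
        raise ValueError("PSQI has exactly 7 component scores")
    if any(s not in (0, 1, 2, 3) for s in component_scores):
        raise ValueError("each component must be scored 0-3")
    total = sum(component_scores)
    return total, total >= cutoff

# Hypothetical component scores for one respondent
total, disordered = psqi_global([1, 2, 1, 0, 1, 0, 2])
```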
---
Fatigue
The Fatigue Scale-14 (FS-14) is a 14-item scale aiming at measuring the severity of fatigue during the past 2 weeks (47). The items of this instrument are rated on a 2-point scale of 0-1. The FS-14 contains two dimensions measuring physical fatigue and mental fatigue, respectively. Higher total scores of the 14 items indicate a higher level of fatigue. This instrument has been proven valid and reliable among the Chinese population (48). In this study, the Cronbach's alpha of the instrument was 0.845.
---
Resilience
The Resilience Style Questionnaire (RSQ) consists of 16 items that are rated on a 5-point Likert scale from 1 to 5. It is used to measure the level of an individual's resilience during the past 2 weeks (49). Higher total scores of the 16 items indicate a greater ability to recover from negative events. This instrument was developed and validated among Chinese rural left-behind adolescents and non-local medical workers (50,51). In this study, the Cronbach's alpha of the instrument was 0.975.
---
Social support
The level of perceived social support of the participants was measured by two items covering emotional support and material support during the past 2 weeks (52). The items were: (1) "How much support can you obtain from family/friends/colleagues when you need to talk or to obtain emotional support?" and (2) "How much support can you obtain from family/friends/colleagues when you need material support (e.g., financial help)?" Each item was rated on an 11-point Likert scale from 0 to 10. In this study, the Cronbach's alpha of the instrument was 0.819.
---
Peace of mind
The Peace of Mind Scale (PoM) comprises a total of 7 items rated on a 5-point scale ranging from 1 ("not at all") to 5 ("all of the time") and is used for measuring peace of mind during the past 2 weeks (53). Higher total scores indicate a more peaceful mind. This instrument has been validated among the Chinese population (53). In this study, the Cronbach's alpha of the instrument was 0.874.
---
Statistical analysis
Descriptive analyses were performed to describe the participants' demographic characteristics, clinical characteristics, the condition of perceived stigma, and potential influencing factors.
In the absence of an accurate and precise reference standard, LPA has been widely employed to identify symptom characteristics and to further calculate and determine optimal cut-off points of assessment instruments (54)(55)(56). LPA is a person-centered statistical method that employs the latent profile model (LPM) to divide a population into multiple profiles, and it focuses on identifying latent subpopulations within a population based on a set of continuous variables (57)(58)(59). Despite the possible arbitrariness of LPA in determining the number of class members due to its semi-subjective properties, the misclassification rate is relatively low, and it can produce more reasonable results than some other classification approaches (60)(61)(62). Generally, in LPA, individuals assigned to the latent profile that represents the lowest level of symptoms or risks are regarded as "non-cases," and others are considered "cases" (56). Hence, LPA was conducted to identify the characteristics of perceived COVID-19 stigma among RD. Robust maximum likelihood (MLR) estimation was employed to estimate the parameters. The Lo-Mendell-Rubin (LMR) test and the bootstrap likelihood ratio test (BLRT) were performed to compare the model fit improvement between models with k classes and k-1 classes; significant p values indicated a better fit for the model with k classes. The optimal number of classes was evaluated by the entropy, Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), the adjusted Bayesian Information Criterion (aBIC), and the interpretability and definition of the classifications, where an entropy value ≥ 0.80 represents adequate quality of classification, lower AIC, BIC, and aBIC values indicate better model fit, and the "turning point" of the scree plot for the aBIC can suggest an appropriate number of classes.
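The information criteria used for model selection have closed forms: AIC = -2logL + 2p, BIC = -2logL + p·ln(n), and aBIC replaces n with (n+2)/24 inside the logarithm. A minimal sketch of comparing two candidate class solutions (the log-likelihoods and parameter counts below are hypothetical, not the study's values):

```python
import math

def information_criteria(log_likelihood, n_params, n_obs):
    """AIC, BIC, and sample-size-adjusted BIC (aBIC) used to compare
    latent profile models; lower values indicate better fit."""
    ll2 = -2.0 * log_likelihood
    aic = ll2 + 2 * n_params
    bic = ll2 + n_params * math.log(n_obs)
    abic = ll2 + n_params * math.log((n_obs + 2) / 24.0)
    return aic, bic, abic

# Hypothetical 2-class vs 3-class models fit to n = 1,297 respondents
aic2, bic2, abic2 = information_criteria(-21500.0, 37, 1297)
aic3, bic3, abic3 = information_criteria(-21300.0, 50, 1297)
# In this illustration the 3-class model wins on all three criteria
```

Note that BIC penalizes extra parameters more heavily than AIC, and aBIC sits between them, which is why the three criteria can disagree near the optimal class number.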
After the selection of the optimal model and the definition of the classifications, univariate analyses began with the full set of demographic and clinical characteristics, PTSD, anxiety, depression, sleep disorder, fatigue, resilience, social support, and peace of mind, to evaluate their associations with the different characteristics of perceived COVID-19 stigma. Statistically significant variables (p ≤ 0.20) in the univariate analysis were further entered into a stepwise multinomial logistic regression analysis. Adjusted odds ratios (AOR) and the corresponding 95% confidence intervals (95% CI) were calculated to assess the regression model results.
Receiver operating characteristic (ROC) analysis was conducted to determine the optimal cut-off value for the CSS-S. The area under the ROC curve (AUC), sensitivity, specificity, and Youden's index were employed to evaluate the performance of the classifiers, and Youden's index was used to identify the optimal cut-off value. SAS 9.4 and Mplus 8.3 were used to conduct all analyses, with the significance level set at p = 0.05.
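Youden's index selection can be sketched as a scan over candidate cutoffs, maximizing J = sensitivity + specificity - 1 (the scores and case labels below are hypothetical, not study data):

```python
def youden_optimal_cutoff(scores, labels):
    """Scan candidate cutoffs and return the one maximizing Youden's
    J = sensitivity + specificity - 1, classifying score >= cutoff as
    positive, as in the ROC analysis described above."""
    best_cutoff, best_j = None, -1.0
    for cutoff in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1
        if j > best_j:
            best_cutoff, best_j = cutoff, j
    return best_cutoff, best_j

# Hypothetical CSS-S totals with LPA-derived case labels (1 = "case")
scores = [15, 18, 20, 22, 25, 27, 29, 31, 34, 40]
labels = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
cutoff, j = youden_optimal_cutoff(scores, labels)
```

In practice the AUC would also be inspected to confirm the classifier discriminates well before committing to the Youden-optimal cutoff.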
---
Results
---
Demographic characteristics
Among the 1,541 people who finished the survey questions, 1,297 questionnaires were enrolled in the data analysis. As illustrated in Table 1, over half of the participants were male (n = 563, 56.6%) and less than or equal to 60 years old (n = 683, 52.7%). The majority of the participants were from urban areas (n = 1,136, 87.6%) and married (n = 1,105, 85.2%). Most of the participants had a 2020 income of less than 60,000 China Yuan (CNY; 1 CNY equaled 0.14 USD on December 31, 2022) (n = 805, 62.1%) and had an education level of senior high school or below (n = 921, 71%). A small percentage of participants lived alone (n = 158, 12.2%), used alcohol no less than 2 times per week (n = 117, 9%), or were current smokers (n = 161, 12.4%). The COVID-19 patients were clinically classified into four categories: asymptomatic (n = 60, 4.6%), mild (n = 927, 71.5%), moderate (n = 132, 10.2%), and critically severe (n = 178, 13.7%). A significant proportion of the participants had no ICU experience (n = 1,250, 96.4%), had never received psychological or emotional counseling during hospitalization (n = 1,225, 94.4%), and had never received psychological or emotional counseling before infection (n = 1,169, 90.1%). Just under half of the participants stayed over 20 days in hospital (n = 611, 47.1%), and 530 (40.9%) had no complication. Most of the patients perceived good (n = 736, 56.7%) or moderate (n = 247, 19%) mental health status during hospitalization.
---
Stigma and related psychological factors
The total score of the 12-item CSS-S ranges from 12 to 48, with higher scores indicating a more stigmatizing attitude; the mean score in this study was 28.04 (SD = 7.33). The mean scores for fatigue, peace of mind, resilience, and social support were 6.38 (SD = 4.04), 24.70 (SD = 5.99), 56.82 (SD = 14.04), and 14.25 (SD = 5.18), respectively. The prevalence of PTSD, anxiety, depression, and sleep disorder was 16.5%, 28.8%, 37.9%, and 47.1%, respectively (Table 2).
---
Latent profile analysis
Latent profile models with one- to five-class solutions were specified, and the fit indices of the five models are displayed in Table 3. The entropy values of all solutions were above 0.9, and the LMR and BLRT tests were all statistically significant. The AIC, BIC, and aBIC decreased as the number of classes increased, and the scree plot of the aBIC flattened out after the 3-class model (see Figure 1). Taken together, considering model fit, parsimony, and the interpretability of the classes, the 3-class model was selected as optimal for the current sample; the distribution and conditional means of the CSS-S items for each class in the 3-class model are illustrated in Figure 2 and Table 4. In the 3-class model, the average latent class probabilities for most likely latent class membership (0.978, 0.977, and 0.972) demonstrated reasonable classification and good distinction (see Table 5). Given the conditional means of the items in each class, we defined Class 1 (n = 166, 12.8%) as the "low perceived COVID-19 stigma" group, Class 2 (n = 663, 51.1%) as the "moderate perceived COVID-19 stigma" group, and Class 3 (n = 468, 36.1%) as the "severe perceived COVID-19 stigma" group.
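The fit-index comparison described above can be sketched as follows. The log-likelihoods and free-parameter counts are invented for illustration (the study's own values are in Table 3); only the formulas for AIC, BIC, and Sclove's sample-size-adjusted BIC (aBIC) and the "elbow" reading of the aBIC are taken from the text.

```python
import math

# Hypothetical information criteria for 1- to 5-class LPA solutions.
n = 1297  # sample size in this study
candidates = {  # classes: (log-likelihood, number of free parameters) - invented
    1: (-23000.0, 24),
    2: (-21800.0, 37),
    3: (-21200.0, 50),
    4: (-21050.0, 63),
    5: (-20980.0, 76),
}

fits = {}
for k, (logl, p) in candidates.items():
    aic = -2 * logl + 2 * p
    bic = -2 * logl + p * math.log(n)
    abic = -2 * logl + p * math.log((n + 2) / 24)  # Sclove's adjusted BIC
    fits[k] = (aic, bic, abic)
    print(f"{k}-class: AIC={aic:.1f}  BIC={bic:.1f}  aBIC={abic:.1f}")

# With these values the aBIC keeps falling, but the drop flattens after the
# 3-class solution, mirroring the scree-plot "elbow" criterion in the text.
drops = [fits[k][2] - fits[k + 1][2] for k in range(1, 5)]
```

In practice, entropy and the LMR/BLRT tests would be weighed alongside these criteria, as the study describes.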
---
Influencing factors of perceived COVID-19 stigma of RD
The results of the univariate analysis showed that female gender (χ2 = 21.999, p < 0.001), older age (χ2 = 45.595, p < 0.001), being married (χ2 = 4.401, p = 0.111), low family income (χ2 = 23.261, p < 0.001), living with other people (χ2 = 7.456, p = 0.024), low education level (χ2 = 61.653, p < 0.001), having a complication (χ2 = 10.117, p = 0.006), perceiving worse mental health status during hospitalization (χ2 = 48.489, p < 0.001), PTSD (χ2 = 73.360, p < 0.001), anxiety (χ2 = 74.878, p < 0.001), depression (χ2 = 70.081, p < 0.001), sleep disorder (χ2 = 70.875, p < 0.001), and fatigue (F = 21.220, p < 0.001) were positively associated with perceived COVID-19 stigma, while resilience (F = 22.030, p < 0.001), social support (F = 25.070, p < 0.001), and peace of mind (F = 39.130, p < 0.001) were negatively associated with perceived COVID-19 stigma among RD (see Table 6). These variables were then entered into the stepwise multinomial logistic regression with the "low perceived COVID-19 stigma" group as the reference. The results showed that older age (AOR = 1.753, p = 0.004), living with other people (AOR = 2.152, p = 0.003), anxiety (AOR = 2.444, p = 0.004), and sleep disorder (AOR = 1.921, p = 0.002) were positively associated with moderate perceived COVID-19 stigma, while higher educational level (AOR = 0.624, p = 0.012) was negatively associated with it. Female gender (AOR = 1.674, p = 0.011), older age (AOR = 3.046, p < 0.001), living with other people (AOR = 2.037, p = 0.011), anxiety (AOR = 2.813, p = 0.001), and sleep disorder (AOR = 2.628, p < 0.001) were positively associated with severe perceived COVID-19 stigma, while higher educational level (AOR = 0.340, p < 0.001), social support (AOR = 0.953, p = 0.021), and peace of mind (AOR = 0.951, p = 0.008) were negatively associated with severe perceived COVID-19 stigma among RD (Table 7).
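For illustration, an adjusted odds ratio is obtained by exponentiating the fitted multinomial-logit coefficient, with a 95% CI from exp(beta ± 1.96·SE). The coefficient and standard error below are invented (the coefficient is merely chosen so that exp(beta) lands near the magnitude of the anxiety AOR reported above); they are not the study's actual estimates.

```python
import math

def aor_with_ci(beta, se, z=1.96):
    """AOR = exp(beta); 95% CI bounds = exp(beta -/+ z * se)."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# Hypothetical coefficient for "anxiety" in the severe-vs-low contrast.
beta, se = 1.034, 0.31
aor, (ci_lo, ci_hi) = aor_with_ci(beta, se)
print(f"AOR = {aor:.3f}, 95% CI ({ci_lo:.3f}, {ci_hi:.3f})")
```

An AOR above 1 (with a CI excluding 1) indicates a positive association with the stigma class relative to the "low perceived COVID-19 stigma" reference group.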
---
Receiver operating characteristic analysis
To identify the optimal cut-off value of the CSS-S for screening perceived COVID-19 stigma among RD, participants assigned to the "low perceived COVID-19 stigma" group in the LPA were defined as "non-cases" (i.e., no stigma), and those assigned to the "moderate" and "severe perceived COVID-19 stigma" groups were defined as "cases" (i.e., probable stigma). The ROC curve was then plotted for the total CSS-S score against this binary outcome, with an AUC of 99.96% (p < 0.001), indicating good discriminative capacity for perceived COVID-19 stigma (see Figure 3). The diagnostic criteria and indices are illustrated in Table 8. The optimal cut-off value was ≥ 20, at which the sensitivity, specificity, and Youden's index were 0.996, 0.982, and 0.978, respectively.
---
Discussion
This cross-sectional study employed LPA to characterize perceived COVID-19 stigma among RD and analyzed its psychosocial contributing factors. Perceived stigma among RD fell into three categories in this study. We examined demographic characteristics and several possible psychological predictors of perceived COVID-19 stigma. In general, older age, living with other people, anxiety, and sleep disorder were positively associated with moderate perceived COVID-19 stigma, while higher educational level was negatively associated with it; female gender, older age, living with other people, anxiety, and sleep disorder were positively associated with severe perceived COVID-19 stigma, while higher educational level, social support, and peace of mind were negatively associated with it. The cut-off point of the stigma scale was determined to be 20 using ROC analysis.
This study classified COVID-19 RD into three groups according to stigma level: the "low," "moderate," and "severe perceived COVID-19 stigma" groups. Only 12.8% of RD fell into the "low perceived COVID-19 stigma" group, which showed the lowest stigma levels and reported the lowest levels of psychological risk factors. The majority belonged to the "moderate perceived COVID-19 stigma" group (51.1%). Compared with the "low perceived COVID-19 stigma" group, anxiety and sleep disorder were positively associated with moderate perceived stigma. In line with previously published studies, anxiety was a major risk factor for stigma. In a study that evaluated depression and anxiety symptoms among 174 patients who recovered from symptomatic COVID-19 infection in Saudi Arabia, stigma scores were significantly associated with higher anxiety scores (63). Other studies on people living with epilepsy, dementia, and cancer have also demonstrated that anxiety is one of the psychosocial determinants of perceived stigma (64)(65)(66). Therefore, mitigating anxiety symptoms is essential to reducing stigma among RD. Emotional regulation, mindfulness, and experiential techniques are possible approaches to improving social anxiety disorder symptoms (67). RD could also try exercise, yoga, and meditation, which have been shown to have a modest positive effect on alleviating anxiety (68). Hospitals and communities should assess the anxiety level of COVID-19 RD to detect anxiety as early as possible in this population (69). Society, for its part, should be less hostile toward RD. It is necessary for social media to refute false information, strengthen information guidance, and circulate positive information, so as to prevent anxiety at its source.
Our study also found that sleep disorder is a determinant of moderate perceived stigma in RD. Previous studies showed that 29.5% of hospitalized COVID-19 patients had sleep disorders (70), and poor sleep quality was associated with stigma (71). Cognitive behavior therapy treats insomnia by avoiding behaviors and thoughts that might develop into sleep disorders (72); RD with sleep disorders could use this method on their own to improve their sleep quality, and effective programs based on the therapy could be embedded in smartphones to assist sleep promotion (73). In addition, progressive muscular relaxation is an effective way to help COVID-19 patients feel less anxious and sleep better (74). The "severe perceived COVID-19 stigma" group reported three additional risk factors compared with the "moderate perceived COVID-19 stigma" group: female gender, insufficient social support, and lack of peace of mind. Female gender is a risk factor for "long-COVID" syndrome, and women tend to have a higher proportion of physical and psychological symptoms than men (75). Because of the more severe illness and distress they suffered, they might find it difficult to maintain a good mentality toward stigmatizing attitudes. A low perceived level of social support prevailed during the pandemic due to the shutdown of many places, such as schools, markets, and workplaces, to avoid transmission of the virus (76). RD facing such conditions may develop a sense of isolation and vulnerability, which can aggravate stigma. Perceived social support and use of adaptive coping strategies were found to affect individuals' psychological adjustment and resilience (77). Interventions such as in-person interviews, supportive psychotherapy, and positive attention would improve their social support and could be considered for wide promotion (78).
Peace of mind may increase one's self-awareness and one's attitude toward the surroundings, and thereby indirectly reduce the sense of being stigmatized. A previous study of female patients with schizophrenia likewise found that enhancing peace of mind helps reduce stigma levels (79).
Our study determined 20 as the cut-off score for the CSS-S via LPA and ROC analysis, which may guide future epidemiological studies on COVID-19 stigma. The cut-off value is also instructive for clinical practice in mental health promotion among COVID-19 RD. Hospitals are advised to collect stigma information from discharged patients and provide relevant psychological intervention for patients whose scores reach or exceed 20.
Although our team has previously analyzed the same population, exploring the prevalence and influencing factors of anxiety and depression in RD (80), the further analysis in this study offers observations from a different perspective. This study enriches our knowledge of the association between mental health and perceived stigma among RD, and provides possible suggestions for the authorities and society to reduce perceived COVID-19 stigma in the future. However, it has several limitations. First, the cross-sectional design has inherent limitations, as it contains no time dimension to support causal relationships. Second, the study was conducted more than 18 months after the COVID-19 patients were discharged, which may introduce recall bias. Third, convenience sampling may reduce the representativeness of the sample. Fourth, stigma comprises two facets, "public stigma" and "self-perceived stigma"; in this study, we measured only the latter. Further studies should measure stigma more comprehensively in a representative sample.
---
Conclusion
This study provides insight into the prevalence and influencing factors of perceived stigma among RD in Wuhan. Stigma among COVID-19 RD could be divided into three groups: the "low," "moderate," and "severe perceived COVID-19 stigma" groups. Given the cut-off value identified, the high proportion of RD with perceived stigma highlights the importance of addressing stigma and discrimination, because of their impact on personal and community well-being. It is therefore essential to mitigate psychological problems and reduce the perceived stigma level of RD as part of the response to the COVID-19 pandemic. Psychological interventions targeting anxiety, sleep disorder, and social support are suggested to alleviate mental health problems and stigma in this population. Additionally, this study established a precise cut-off value for the CSS-S, which provides a valuable tool for screening perceived stigma among future COVID-19 patients and can be used to identify patients in need of tailored interventions.
---
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
---
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Review Board of the Institute of Pathogen Biology, Chinese Academy of Medical Sciences (IPB-2020-22), and the Research Ethics Committee of the hospital (2021001, 2021028). The patients/participants provided their written informed consent to participate in this study.
---
Author contributions
ZD and YW prepared the first draft and analyzed the data. XS provided overall guidance and managed the overall project. WX, HW, YH, MS, JF, XC, MJ, ZL, DC, and WM were responsible for the questionnaire survey and data management. YW, ZD, and XS prepared and finalized the manuscript on the basis of comments from other authors. All authors contributed to the article and approved the submitted version.
---
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
---
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. |
This study aimed to examine the role of social capital in economic resilience in Bantul Regency during the pandemic. | Introduction
The emergence of Covid-19 has had a highly complex impact on human life. Developing a vaccine against the virus, which attacks the immune and respiratory systems, took considerable time, and the outbreak eventually became a pandemic. Its impact has changed the entire order of human life, including the economy.
In the early period of the virus's emergence, the world economic system began to collapse; almost no country's economy was spared the effects of the pandemic. The International Monetary Fund (IMF) recorded global economic losses of approximately 12 trillion US dollars (around 168,000 trillion rupiah) (Redaksi WE Online, 2020). In addition, Covid-19 reduced the economic growth rate to around -6% and pushed the global unemployment rate to 7-9% (Redaksi WE Online, 2020). This illustrates how complex and large the impact of the virus has been.
The outbreak of the pandemic has had a great impact on the world economy, affecting micro firms, small and medium enterprises (SMEs), and large companies (Ahmed et al., 2021; Chen & Wei, 2022; Wulandari, 2020). The pandemic has disrupted the study and job-search plans of college graduates and aggravated their employment difficulties (Wang, 2021). The garment industries in Bangladesh have also been affected, leading to social and economic losses (Ahmed et al., 2021). More broadly, the pandemic has had an unprecedented negative impact on people's lives and the world economy, including the stock market (Sun et al., 2021).
As a developing country, Indonesia also experienced the impact of the virus, which originated in Wuhan, China. To minimize its spread, Indonesia implemented several policies, such as limiting entry for foreign tourists, imposing large-scale social restrictions (PSBB), providing incentives/subsidies, and various other economic, political, and sociocultural measures. The Director General of Taxes at the Ministry of Finance noted three major impacts of Covid-19 in Indonesia: declining purchasing power; uncertainty over the end of the pandemic, which weakened business investment and activity; and falling import-export commodity prices caused by the pandemic's global impact (Zuraya, 2020).
Covid-19 has affected all aspects of economic life, especially tourism, trade, and investment (Sumarni, 2020). The hardest-hit sectors are those that depend on crowds (such as tourism, events, exhibitions, and malls); their supporting businesses (such as mass transportation, ticketing, hotels, and seasonal/souvenir trade); businesses that cannot implement physical/social distancing (such as salons, barbershops, motorcycle taxis, spas, children's playgrounds, and house-cleaning services); tertiary-product businesses whose sales depend on household savings (such as property, personal vehicles, body care, and hobbies); as well as supporting financial businesses (such as leasing and other lending institutions) (Hadiwardoyo, 2020).
Various efforts have been made to minimize the impact of the Covid-19 pandemic on the economy, and the government has had to work hard to restore it. Economic recovery policies implemented by the Indonesian government include the provision of social assistance to the poor, tax relaxation, and other measures.
The global and widespread impact of Covid-19 requires collaboration between the public and service providers (Desalegn et al., 2021). The government cannot work alone; experience has shown that no country was ready or able to anticipate the impact of the Covid-19 pandemic. Equally, the community cannot be left to fend for itself without the support of government policies. In this situation, collaboration and cooperation between civil society and government become a necessity.
People's behavior and attitudes are key to overcoming this pandemic (Drury et al., 2021), meaning that collaboration between government and community is required to minimize its impact. The government cannot rely solely on structural policies; it also needs to adopt and develop policies that are cultural in nature, based on values living and existing in society. In other words, social capital-based policies need to be pursued and developed in building national economic resilience. The values, culture, motivation, and networks of cooperation in the community must be optimized to facilitate and assist the government in overcoming the complex impacts of the pandemic.
The implementation and development of policies based on social capital in building national resilience is the focus of this study. The relationship between social capital and economic policy is quite relevant: at the very least, policies grounded in the community's social capital make it easier for people to trust and accept them (Rothstein, 2003). Social capital can also be a source of social clarity and optimism in dealing with the complex and prolonged impact of the pandemic (Mutiara et al., 2020).
We conducted this study in Bantul Regency, Special Region of Yogyakarta, which covers an area of 506.85 km² located between 07º44'04"-08º00'27" South Latitude and 110º12'34"-110º31'08" East Longitude (Bappeda Kabupaten Bantul, 2017). Like other regions, this regency has experienced the impact of the Covid-19 pandemic. To anticipate this impact, the Bantul government issued Regent's Instruction (Inbup) Number 31/INSTR/2021 regulating trading activities, covering traditional markets, supermarkets, grocery stores, culinary centres, food stalls, restaurants, cafes, street vendors, hawker stalls, pharmacies, drug stores, and the like. Under the Instruction, which was valid from 19 October to 1 November 2021, daytime trading in traditional markets was restricted to 06.00 pm, while night markets were restricted to 09.00 pm with a maximum capacity of 75%. Similarly, supermarkets, grocery stores, and other outlets selling daily necessities were restricted to 09.00 pm with a maximum capacity of 75% (B and DP).
The impact felt by the community was a decrease in income due to restrictions on community activities and market operations, and some 90% of Bantul's economy is supported by micro, small, and medium enterprises (MSMEs) (B). This study therefore examined how social capital-based national resilience policies and efforts were developed in Bantul Regency during the Covid-19 pandemic.
---
Method
This study was descriptive qualitative research with a case-study approach, conducted between August and October 2021. Qualitative research begins with assumptions and a theoretical framework that bears on a social problem (Creswell, 2015). Data were collected through semi-structured interviews with small business owners, community leaders, and government officials in Bantul Regency, supplemented by documentation. The sample was selected through purposive sampling, with data saturation as the criterion for sample size. Analysis involved coding and theme development to identify patterns and themes in the data. The validity of the data was checked using source triangulation, which draws on different information sources: we compared interview results across research subjects and against the contents of documents related to the research in order to draw conclusions (Sugiyono, 2011).
---
Results and Discussion
---
Social capital
Social capital is an important concern in every effort at empowerment and problem solving in the community, and it has become a popular and well-known concept in social studies (Wu, 2021). Community empowerment based on social capital is an important key to overcoming problems in the community. The concept stems from the assumption that individuals cannot deal with their problems alone, because the individual is socially powerless (Hanifan, 1916). Social capital is also considered an alternative to the failures of development viewed purely from the standpoint of economic development (Saefulrahman, 2015). Individuals must therefore share networks, cooperate, and help one another; this is what is called social capital (Syahra, 2003). Simply put, social capital affirms that relationships matter (Andriani & Christoforou, 2016). Social capital is not capital in the general economic sense, and although many experts have attempted to pin down the term, conceptual and theoretical ambiguity and confusion still surround it (Durlauf & Fafchamps, 2005). According to Hanifan (1916), social capital is very broad, taking the form of goodwill, fellowship, mutual sympathy, and the social interaction of the individuals and family groups that make up social communities. The point is that everything needed for, and constructive in, building and strengthening social relations or communities is categorized as social capital.
Fukuyama likewise emphasized that social capital can be defined simply as a set of informal values or norms shared among group members that enable them to cooperate with one another (Fukuyama, 2000). The difference is that Fukuyama stresses the importance of trust in strengthening social capital; for him, trust is a "lubricant" that allows an organization to run well (Fukuyama, 2000).
In line with these views, Bhandari and Yasunobu (2009) argue that the term 'social capital' is highly complex and broad: social capital is a collective asset in the form of norms, values, beliefs, trust, networks, social relationships, and shared institutions that facilitate cooperation and collective action for mutual benefit. However, social capital (like other forms of capital) is only useful and beneficial when applied in social relations and interaction; otherwise, it has no impact on the community. Social capital is essentially neutral and depends on the members of the community, which is why it also requires human capital (Hemingway, 2006). Although social capital is generally considered good and constructive, it can also be destructive or useless (Fukuyama, 2000), and quite a few experts doubt its meaning and role (Andriani & Christoforou, 2016).
Many studies have shown that social capital plays a positive role in various areas of life. In the field of crime, social capital is considered to affect crime rates (Buonanno et al., 2009). Likewise, in the economy, social capital is associated with investing less in cash and more in inventory, using more checks, having greater access to institutional credit, and reducing reliance on informal credit (Gide, 1967).
Social capital has itself been affected by the pandemic in various ways. Studies have shown that community decision-making during the pandemic is influenced by social capital, including trust, social norms, and social networks (Prayitno et al., 2022). The use of information and communication technologies (ICTs) has been effective in maintaining social capital at the individual level during the pandemic (Bagdasaryan, 2021). Governments have mobilized social capital to support their efforts during the pandemic, although this does not always lead to positive resource mobilization (Hanani et al., 2021). The pandemic has also reshaped social capital, with an increase in the emotional intensity and length of conversations but a decrease in the frequency of meeting alters (Dávid et al., 2023). Corporate benevolence has been found to have a positive impact on social capital during the pandemic, with companies with larger firm size, higher leverage, higher institutional ownership, and higher ESG rankings more likely to donate COVID relief (Filbeck et al., 2022).
The utilization of social capital in community development and empowerment is important. In dealing with the impact of the Covid-19 pandemic, the use of social capital is urgently needed so that the pandemic's impacts can be minimized and the government's workload eased, especially in building national resilience. Social capital serves as a useful framework for successful development and policy formulation (Fathy, 2019) and is closely related to poverty reduction (Kharisma et al., 2020). In the pandemic context, Ronnerstrand (2014) concluded that social capital is strongly correlated with acceptance of influenza A (H1N1) immunization.
---
Social capital-based economic resilience
Bantul has local wisdom values as social capital that can be developed. According to Samsuri, Yogyakarta has local wisdom relevant to managing economic resilience, such as the philosophical value of Hamemayu Hayuning Bawana; the moral teachings of sawiji, greget, sengguh, ora mingkuh; and the spirit of golong gilig (FGD, 2021).
Hamemayu Hayuning Bawana means the obligation to protect, maintain, and foster the safety of the world; it is also concerned with working for the community rather than fulfilling personal ambitions. The moral teachings of sawiji, greget, sengguh, ora mingkuh mean concentration, enthusiasm, self-confidence, humility, and responsibility. In addition, golong gilig is the spirit of unity between humans and their God as well as among fellow human beings (Jogjakarta, 2021).
The government of Bantul implemented an economic resilience policy based on social capital. Normatively, to curb the spread of Covid-19 while sustaining the community's economy, the government issued Regent's Instruction Number 31/INSTR/2021 on trading activity. The accompanying policies included exemptions from user fees for traders and a 50% reduction in user fees, as well as a revolving fund policy: a program of government loans to traders in community markets.
The visible social capital lies in how the government builds trust and cooperation between the district and village governments and community organizations, as well as between government and society. The involvement of community organizations is realized through collaboration between the government and the Association of Indonesian Market Traders (APPSI) in the Bantul, Imogiri, Angkruksari, and Semampir areas; this cooperation is itself a form of trust (NLPI, Interview). Another collaboration with the community is the cooperation between the Bantul Regency Government and Gojek to increase MSME income through digital payments (Wijana & Baktora, 2021).
The Association of Indonesian Market Traders (APPSI) in the Bantul, Imogiri, Angkruksari, and Semampir areas played an important role in the success of public policy. APPSI acts as a mediator between the government and the community, especially market traders, facilitating traders through training programs and workshops and through advocacy on market-related problems and complaints. APPSI even sought alternative capital assistance for traders who did not receive government aid; these traders applied by collecting business certificates, a process facilitated by APPSI. Around 200 business certificates were collected, which APPSI at Semampir market then brought to the sub-district office (NKPS).
The social capital values apparent in Bantul Regency's national resilience policy are the high levels of cooperation and trust between the government and the community. This is evident in the government's policy of providing revolving-fund assistance for market traders: a loan of 1 million rupiah is given to each trader under a repayment system of 10 thousand rupiah per day until the loan is paid off. "Overall, the traders are cooperative and responsible; on average, all traders repay" (DP). Indeed, trust is one of the important indicators of the success of social capital, as emphasized by Fukuyama.
In contrast to market traders, the modern franchise-networked and non-franchise-networked markets have received less attention from the government; there has been almost no assistance or involvement from local government for Alfamart outlets or other modern franchised stores (NKT1, HCV2, HCV3, HCV4). Even so, Bantul Regency's economic resilience remained stable from late 2019 to early 2020, when Covid-19 had already spread in Indonesia. This can be seen from the poverty figures released by the Central Statistics Agency in 2021 (BK Bantul, 2021). Strictly speaking, these figures do not yet cover 2021, whose second semester saw the pandemic's heavier impact; nevertheless, the released data depict Bantul's economy over a full year in which Covid-19 had spread in Indonesia, with all the attendant policies and impacts. According to these data, the depth of poverty between 2019 and 2020 was relatively stable. Qualitatively, there was a correlation between the Bantul government's policies and the social capital base described previously.
Social capital-based economic policies have long been emphasized and researched, and the two domains are considered interconnected (Dufhues et al., 2011; Gide, 1967; Iyer et al., 2005; Kharisma et al., 2020). Local economic growth should benefit from both bonding and bridging social capital; bonding social capital consists of close ties within small, cohesive communities (Wolleb, 2019). Policies based on social capital are constructive as long as that capital is actively empowered. Social capital can be a solution that prevents anxiety and social rifts from arising out of the impact of Covid-19 (Mutiara et al., 2020).
Policies based on social capital are relatively easy for the public to accept and understand. Implementing policies that involve community organizations, as part of social capital, bridges and facilitates policy objectives, because the values, organizations, and collaborations involved are not new or strange to the community; this creates trust and acceptance. Social capital in the community thus supports the effectiveness of government performance. Tavits argues that social capital may be linked to government performance because it increases the level of political sophistication and facilitates cooperation in society, as well as helping people better voice their policy demands (Tavits, 2006).
The involvement of APPSI in the policies against Covid-19 in Bantul Regency is a form of utilizing social networks (social capital) in the community. This facilitates socialization and makes it easier for the community to express their aspirations to the government. Thus, social capital shapes the community's response to Covid-19, acting both as a facilitator and as a leading point of policy compliance (Wu, 2021).
The development of public policies based on social capital has long been pursued. At the same time, it represents an alternative and a new breakthrough: development need not always be viewed from an economic standpoint. Social capital-based policies offer an alternative to treating economic development as the only path, especially since not all economic development has been successful. It is therefore important to create a new concept or approach based on social capital (Saefulrahman, 2015).
The important emphasis of social capital is that the community should have the capacity and opportunity to improve and empower itself. This potential is actualized in the form of cooperation, norms, beliefs, and values that live on and are maintained, because people must continually interact with their environment and its problems. There is also the awareness that the individual alone is socially helpless (Hanifan, 1916), leaving no choice but to build cooperation and shared values to create social capital.
In the community, social capital is latent and neutral; it requires the involvement of other parties for optimization. In Hemingway's terms, social capital also requires human capital (Hemingway, 2006). In strengthening human capital, the role of the government and community organizations in providing facilities and services is essential. Otherwise, existing social capital may never be mobilized into public policy; worse, social capital can also have a destructive impact (Fukuyama, 2000).
The role of the Bantul Regency government in optimizing cooperation and social networking is manifested in the 2013 Regent's Regulation concerning Modern Shop Business Permits. Under the regulation, modern shops must be established at a certain distance from traditional markets. In addition, modern shops are required to build partnerships with micro and small businesses in the form of marketing partnerships, provision of places of business, product acceptance, and equity participation (Peraturan Bupati Bantul Nomor 35 Tahun 2013 tentang Penyelenggaraan Izin Usaha Toko Modern, 2013).
---
Conclusion
Social capital plays an important role in the success of policies taken by the Bantul Regency Government in dealing with Covid-19. The policy taken is the Regent's Instruction Number 31/INSTR/2021 concerning trade activities, restrictions on social activities, market operating hours, and the provision of capital assistance to traders. The values or social capital involved in the economic resilience policy in Bantul Regency are cooperation, trust, and the use of social networks or communities, namely the Indonesia Market Traders Association (APPSI), especially in the Imogiri area and the Semampir, Bantul, and Angkruksari markets. APPSI acts as a facilitator and mediator between the community and the government in conveying aspirations, criticisms, and advice to the government. The success of this social capital-based policy can be seen in the public's compliance with and acceptance of the policies implemented by the Bantul Regency government. It can also be identified through data on the depth of poverty in Bantul's economic metrics across roughly one year of the Covid-19 pandemic (during 2019-2020), released by the Central Statistics Agency for Bantul in 2021. Finally, this study investigated only four markets in Bantul Regency, so the research needs to be expanded, particularly through a quantitative research approach that can provide comprehensive data.
While HIV disproportionately impacts homeless individuals, little is known about the prevalence of HIV risk behaviors in the southwest and how age factors and HIV risk perceptions influence sexual risk behaviors. We conducted a secondary data analysis (n = 460) on sexually active homeless adults from a cross-sectional study of participants (n = 610) recruited from homeless service locations, such as shelters and drop-in centers, in an understudied region of the southwest. Covariate-adjusted logistic regressions were used to assess the impact of age at homelessness onset, current age, age at first sex, and HIV risk perceptions on having condomless sex, new sexual partner(s), and multiple sexual partners (≥4 sexual partners) in the past 12 months. Individuals who first experienced homelessness by age 24 were significantly more likely to report condomless sex and multiple sexual partners in the past year than those who had a later onset of their first episode of homelessness. Individuals who were currently 24 years or younger were more likely to have had condomless sex, new sexual partners, and multiple sexual partners in the past 12 months than those who were 25 years or older. Those who had low perceived HIV risk had lower odds of all three sexual risk behaviors. Social service and healthcare providers should consider a younger age at homelessness onset when targeting HIV prevention services to youth experiencing homelessness. | Introduction
According to the National Alliance to End Homelessness (NAEH), over 500,000 people are homeless on any given night in the U.S. [1]. Other reporting strategies indicate that as many as 1.7-2.5 million youth under age 25 years are homeless or unstably housed on any given night in the U.S. [2][3][4]. Individuals who experience homelessness are at greater risk of acquiring or transmitting HIV compared with people in stable housing [5], with HIV prevalence rates among the homeless being nine times higher than in the general population [6]. Despite this, those who experience homelessness have dramatically limited access to HIV prevention programs, particularly pre-exposure prophylaxis (PrEP) and non-occupational post-exposure prophylaxis (nPEP), two promising biomedical HIV prevention strategies. Homeless youth are less likely to enroll in HIV PrEP trials with one study reporting that only 1% of HIV-negative participants were prescribed PrEP [7]. In PrEP trials, fewer young adults have enrolled than older adults [8] and uptake [7] and adherence were lower in those with unstable housing [9].
Homelessness is associated with several HIV risk behaviors, such as having condomless sex and multiple sexual partners, which contribute to disparities in sexual health outcomes. For example, condomless sex is a prevalent HIV risk behavior among individuals experiencing homelessness. In studies of homeless men, over two-thirds of participants reported having condomless sex in the past six months [10] and each additional sex partner has been found to be associated with an increase of more than two times the odds of engaging in condomless sex [11]. In a sample of both male and female sexually active homeless adults, approximately 76% reported having condomless sex [7]. Among younger homeless populations, 60% reported condomless sex during their last sexual encounter [12]. Among heterosexual homeless men, the strongest predictors of condom use were attitudes about condom use, self-efficacy for condom use, partner type (i.e., long-term or casual), and partner communication about condom use [13]. In addition, having multiple sexual partners and depression were found to be associated with consistent condom use [10]. Predictors of condom use among homeless women include condom efficacy (i.e., belief that condoms reduce risk) and their perceived risk of getting HIV. For example, women who believe they have low HIV susceptibility are less likely to have condomless sex [14]. Yet, in other homeless adult samples, HIV susceptibility, operationalized as worry about getting infected with HIV or AIDS, was not found to be significantly associated with condom use [11]. While researchers have identified several common predictors of condom use among various homeless subgroups, the relation between other factors such as HIV risk perceptions, current age, age at onset of sex and first period of homelessness remains unclear.
Other HIV risk behaviors such as having multiple sexual partners and concurrent sexual partners are prevalent among homeless populations, including ex-offenders and street-involved youth [15,16]. In a study of homeless men from Los Angeles [17], almost 40% reported multiple concurrent sex partners, i.e., having sex with more than one person around the same time, which is a known risk factor for HIV [18]. Finally, sexually active homeless individuals reported more sex partners in their lifetime and in the past 12 months with an unknown HIV serostatus partner compared with HIV+ housed adults [19].
Subgroups of homeless individuals report high rates of sexual risk behaviors including females, young adults under 25 years old, sexual minorities, and those with early sexual debut [20,21]. Homeless women are more likely than homeless men to engage in sexual risk behaviors, such as having condomless sex with casual partners and having multiple partners [22][23][24][25][26]. Homeless youth are 6-12 times more likely to become infected with HIV than housed youth, with prevalence rates as high as 13% [27,28]. Homeless youth have earlier sexual debut; are more likely to have multiple partners, trade sex for food, shelter, money, or substances, and use substances before sex; and are less likely to use a condom or contraception than stably housed youth [20,[29][30][31][32][33]. Lesbian, gay, bisexual, transgendered, and questioning (LGBTQ) homeless persons have a higher risk of HIV than heterosexual individuals. Among a sample of homeless LGBTQ young adults, 17% reported a diagnosis of HIV [33]. As well, LGBTQ homeless youth experience higher rates of sexually transmitted infections (STIs) than homeless heterosexual youth [34]. Finally, both males and females who had sex before age 13 years were more likely than non-early sexual initiators to have multiple sexual partners during their lifetime and to engage in condomless sex [35].
Multiple risk factors for engaging in sexual risk behaviors have been identified among homeless populations. However, less is known about differences in sexual risk behaviors according to current age (i.e., young adults vs. older adults) as many studies of homeless youth do not compare youth to older adults in the same sample. Less is known as well about the age of homelessness onset as a potential, independent risk factor despite findings suggesting that age of onset is correlated with other risk behaviors such as substance use, with a younger age at the onset of homelessness being associated with a higher incidence of substance use [36]. An Oakland, California based study found that, among homeless adults >50 years old, those who first became homeless before age 50 reported higher rates of substance use than those with a later age of homelessness onset [37]. While research demonstrates that age at the onset of homelessness is associated with substance use, little is known about the relation between age at the onset of homelessness and sexual risk behaviors in a broader sample of homeless adults and young adults.
---
Purpose
In this study, we examined the prevalence of sexual risk behaviors-having condomless sex, a new sexual partner, and multiple sexual partners-among sexually active homeless adults in Oklahoma City, OK, USA. In addition, we assessed the relations between HIV risk perception, age at sexual debut, age at the onset of homelessness, current age, and sexual risk behaviors. We hypothesized that those who were younger than 25 years and those who experienced homelessness at an earlier age (i.e., under 25 years old) would report more sexual risk behaviors and that those with low perceived HIV risk would report fewer sexual risk behaviors.
---
Methods
---
Data and Sample
From July-August 2016, participants were recruited through flyers posted at six different homeless shelters in Oklahoma City, OK, USA. To be enrolled, participants must have been 18 years of age or older, receiving services (e.g., shelter, counseling) at the targeted shelters, and have had a minimum 7th grade English literacy level based on a score of 4 or higher on the Rapid Estimate of Adult Literacy in Medicine-Short Form (REALM-SF) [38]. After the screening and informed consent process, each participant was given a questionnaire to complete on a tablet computer, which enabled the participant to see survey items on the screen and hear the questions being read aloud via headphones. Participants took about 1 h to complete the survey and were compensated for their time with a $20 department store gift card. The Institutional Review Boards at the University of Texas Health Science Center, the University of Houston, and the University of Oklahoma Health Sciences Center approved this study.
A total of 648 participants were screened for inclusion in the parent study. Thirty-four participants were excluded because they did not meet the reading level criteria REALM literacy score. Four eligible people chose not to participate. A total of 610 adults were enrolled and completed study measures over the 12 day data collection period across 6 shelter sites. An additional 21 participants were not literally homeless and therefore were excluded from the analysis. Of the sexually active participants (n = 467), none had missing data on any of the sexual risk behavior measures (i.e., dependent variables). However, 3 participants were excluded for having missing data on independent variables and 4 additional participants were excluded for missing data on the covariates. The final analytic sample for this study included 460 homeless and sexually active participants.
The subsample of sexually active participants in this study differed from the excluded participants who were not currently sexually active. The sexually active participants were younger, more often of minority race, and more often had less than a high school degree. They also reported more moderate to severe stress and were more often diagnosed with an alcohol or substance use disorder compared to the non-sexually-active participants in the parent study.
---
Measures
---
Dependent Variables
Condomless sex. Participants were asked how often they had vaginal or anal sex without using a condom in the past 12 months. Participants who responded that they had sex without using a condom less than half of the time, about half of the time, not always but more than half the time, or always were coded as having condomless sex, while participants who responded that they never engage in sex without a condom were coded as always having sex with a condom.
New sexual partner. To determine if a participant had a new sexual partner, participants were asked, "Did you have any kind of sex with a person that you have never had sex with before in the past 12 months?" (yes/no).
Multiple sexual partners. Participants were asked to report the number of people they had any kind of sex with in the past 12 months. Responses were dichotomized to 4 or more partners or fewer than 4 to align with the literature on multiple sexual partners and allow for comparisons with other studies and populations [7,16].
Sexually Transmitted Infection (STI). Participants were asked if a health care professional had ever told them they had genital herpes, genital warts, Human papillomavirus or HPV, gonorrhea (sometimes called GC or clap), Chlamydia, or Syphilis.
---
Independent Variables
The independent variables included in the model represented sexual activity and HIV risk perceptions, homelessness and age characteristics, stress, and substance use.
HIV risk perceptions. Participants were asked to rate their perception of their HIV risk from 0 (No risk) to 5 (High risk) [39]. Participants who responded as somewhat at risk, moderate risk, or high risk were coded as high perceived risk, while participants who responded as low or no risk were coded as low perceived risk.
Age characteristics. Participants reported their age at their first sexual encounter. This variable was dichotomized to either <14 years or ≥14 years of age to align with sexual initiation surveys [40]. Participants reported their age at the onset of homelessness, which was dichotomized as <25 years or ≥25 years to align with the literature on homeless youth that often includes youth 25 years old and younger [41,42]. Participants also reported their current age, which was dichotomized as <25 years or ≥25 years.
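The dichotomizations described above are simple threshold recodes. A minimal sketch follows; the field names, record layout, and helper functions are illustrative assumptions, not the study's actual codebook:

```python
# Hypothetical sketch of the threshold recodes described above;
# field names and the sample record are illustrative, not the study codebook.

CUTOFF_SEXUAL_DEBUT = 14   # age at first sex: <14 coded as early sexual debut
CUTOFF_YOUTH = 25          # homelessness onset / current age: <25 coded as youth

def dichotomize(age, cutoff):
    """Return 1 if age is below the cutoff, else 0."""
    return 1 if age < cutoff else 0

def recode_participant(rec):
    """Recode a raw record into the binary indicators used in the models."""
    return {
        "early_sexual_debut": dichotomize(rec["age_first_sex"], CUTOFF_SEXUAL_DEBUT),
        "early_homeless_onset": dichotomize(rec["age_homeless_onset"], CUTOFF_YOUTH),
        "young_adult": dichotomize(rec["current_age"], CUTOFF_YOUTH),
    }

participant = {"age_first_sex": 13, "age_homeless_onset": 19, "current_age": 31}
print(recode_participant(participant))
# {'early_sexual_debut': 1, 'early_homeless_onset': 1, 'young_adult': 0}
```

Each cutoff mirrors the categories reported in the text (<14 vs. ≥14 for sexual debut; <25 vs. ≥25 for homelessness onset and current age).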
---
Covariate Measures
The regression models included various characteristics that may influence participants' sexual risk behaviors, in addition to HIV risk perceptions and age characteristics. Covariates included sex (1 = female; 0 = male), marital status (1 = married; 0 = not married), education (1 = high school diploma or more; 0 = less than high school diploma), race/ethnicity (1 = minority race/ethnicity; 0 = white), history of sexually transmitted infections (1 = present; 0 = absent), and sexual orientation (1 = LGBTQ; 0 = heterosexual). The four-item Perceived Stress Scale was used to assess perceived stress [43]. Items were summed, and responses were grouped into two groups: moderate to severe stress (score of 9 or more) and low stress (score less than 9) [44,45]. Participants were also asked if they had ever received a diagnosis for alcohol or substance use disorder (yes/no).
---
Analytic Plan
Descriptive and logistic regression analyses were conducted using STATA version 24.0 statistical software (StataCorp LP, College Station, TX, USA). Bivariate analyses assessing the relation between sexual risk behaviors and individual factors were conducted using chi-square tests. Three covariate-adjusted logistic regression models were fitted, one for each sexual risk behavior (i.e., condomless sex, a new sexual partner, and 4 or more sexual partners in the past 12 months). Standard errors were adjusted to account for the lack of independence of observations, as participants were clustered within the shelter location where they completed the survey [46].
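As a rough illustration of the bivariate step, the Pearson chi-square statistic for a 2 × 2 table (e.g., age group by condomless sex) can be computed directly; the function below is a generic sketch, and the counts are invented for illustration, not taken from the study's data:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value (df = 1) for the 2x2
    table [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 degree of freedom, P(chi2 > x) = erfc(sqrt(x / 2)).
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Invented counts (not the study's data):
# rows = current age <25 / >=25, columns = condomless sex yes / no
stat, p_value = chi_square_2x2(10, 20, 30, 40)
print(f"chi2 = {stat:.3f}, p = {p_value:.3f}")
```

The covariate-adjusted models themselves would be fitted in Stata with cluster-robust standard errors; this snippet only illustrates the unadjusted 2 × 2 comparison.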
---
Results
---
Sample Characteristics
Participants (n = 460) were predominantly ≥25 years old (94%), male (61%), single (88%), white (60%), and heterosexual (92%), and most had at least a high school diploma (76%) (Table 1). This sample approximates the age and sex of the homeless population surveyed in the Oklahoma City Point-in-Time count in 2016 [47].
Regarding the prevalence of sexual risk behaviors in the past 12 months, 53% of the participants reported engaging in condomless sex, 35% had a new sexual partner, and 12% had multiple sexual partners. Eighty-nine percent of participants perceived themselves to be at low risk for HIV. Seventy-one percent of the participants initiated sexual activity at 14 years or older (mean = 14.80 ± 3.67 years). Most participants (70%) were ≥25 years old at the onset of homelessness (mean = 32.6 ± 13.19 years) and had been homeless for an average of 1.71 ± 2.64 years (median = 9.06 months). Additionally, 22% reported a positive sexually transmitted infection history.
The results of the bivariate analyses are shown in Table 1. Having low perceived HIV risk was independently associated with a lower prevalence of all three sexual risk behaviors. An early sexual debut (i.e., initiating sexual activity at 13 years or younger) was associated with having condomless sex but not with having new or multiple sexual partners in the past 12 months. Experiencing first homelessness at 24 years or younger and a current age of 24 years or younger were associated with greater odds of having condomless sex, a new sexual partner, and multiple sexual partners in the past 12 months. Early sexual debut was also associated with having low perceived HIV risk, being female, being married, having less than a high school diploma, and being a racial/ethnic minority. Homelessness onset at <25 years old was associated with having low perceived HIV risk, currently being 24 years or younger, having completed less than a high school diploma, and being a racial/ethnic minority. Currently being younger than 25 years old was associated with being LGBTQ. No significant associations were found between the age characteristics and perceived stress or alcohol and substance use disorder diagnoses. Table 1 note: HIV = Human Immunodeficiency Virus; LGBTQ = lesbian, gay, bisexual, transgender, questioning. Significance of chi-square tests between each sexual risk behavior and each characteristic is denoted by: * p < 0.05; ** p < 0.01; *** p < 0.001.
---
Regression Analyses
Predictors of Condomless Sex. In the covariate-adjusted models, low perception of HIV risk was associated with 73% lower odds of engaging in condomless sex in the past 12 months (Table 2). Being female, being married, and having a sexually transmitted infection were associated with 110%, 283%, and 107% higher odds of having condomless sex, respectively.

Predictors of Having a New Sexual Partner. Low perception of HIV risk was associated with 71% lower odds of having a new sexual partner in the past 12 months. Being younger than 25 years was significantly associated with 265% higher odds of having a new sexual partner. Being married was associated with 55% lower odds of having a new sexual partner, while having a diagnosis of alcohol or substance use disorder was associated with 49% higher odds of having a new sexual partner in the past 12 months.
Predictors of Having Multiple Sexual Partners. Low perception of HIV risk was associated with 77% lower odds of having multiple sexual partners in the past 12 months. Being younger than 25 years was significantly associated with 108% higher odds of having multiple sexual partners in the past 12 months. Experiencing homelessness before 25 years old was associated with 80% higher odds of having multiple sexual partners in past 12 months. Additionally, identifying as LGBTQ was associated with 123% higher odds of having multiple sexual partners in the past 12 months, while a diagnosis of alcohol or substance use disorder was found to be associated with a 32% lower odds of having multiple sexual partners in the past 12 months.
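The percentage phrasing used in these results is a direct transformation of an odds ratio: an OR below 1 corresponds to (1 − OR) × 100 percent lower odds, and an OR above 1 to (OR − 1) × 100 percent higher odds. A minimal sketch (the example ORs are back-calculated from the percentages quoted above, not read from Table 2):

```python
def describe_odds_ratio(or_value):
    """Translate a fitted odds ratio into 'percent higher/lower odds' phrasing."""
    if or_value < 1:
        return f"{round((1 - or_value) * 100)}% lower odds"
    return f"{round((or_value - 1) * 100)}% higher odds"

# Example ORs back-calculated from the percentages in the text (illustrative):
print(describe_odds_ratio(0.27))  # 73% lower odds  (low perceived HIV risk)
print(describe_odds_ratio(3.65))  # 265% higher odds (current age < 25)
print(describe_odds_ratio(1.80))  # 80% higher odds (early homelessness onset)
```

Note that these percentages describe changes in odds, not in probability; for outcomes that are not rare, a doubled odds is a smaller change in risk.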
---
Discussion
Among a cross-sectional sample of sexually active homeless adults from an understudied region in the southwestern United States, we examined the prevalence of recent sexual risk behaviors, including having condomless sex, new sexual partners, and having multiple sexual partners in the past year. We found lower rates of condomless sex than have been found in other studies of homeless adults [7,10]. We also found that the proportion of individuals who had condomless sex, new sexual partners, and multiple sexual partners in the past 12 months was higher among those younger than 25 years old than among those 25 years and older. This aligns with the literature that homeless youth have significant sexual health risks and high prevalence of sexual risk behaviors [19,[29][30][31][32]. Findings from this study add to the literature on sexual risk behaviors among homeless populations by comparing homeless youth and older adults within the same sample and suggesting that the age at the onset of homelessness is an important, independent risk factor to consider when addressing sexual risk behaviors, even after controlling for current age.
In contrast to other studies that have found no significant association with perceived HIV risk and sexual risk behaviors [11], in this sample, we found that having low HIV risk perception was associated with significantly reduced risk of having condomless sex, a new sexual partner, and multiple partners in the past year. This may be related to the participant's ability to accurately conduct a self-assessment of their HIV risk behaviors, i.e., those with low risk behaviors conclude they are at low risk for HIV and therefore report low HIV risk perceptions.
Sex, marital status, ever having an STI, sexual orientation, and alcohol/substance use disorders were found to be significantly related to sexual risk behaviors. Females reported more condomless sex, which aligns with the literature among homeless populations [22,33]. However, in contrast to the literature, fewer females reported having multiple sexual partners [22][23][24]26]. As expected, being married was found to be significantly associated with reduced odds of new sexual partners and increased odds of having condomless sex. This aligns with the literature that has revealed lower condom use among steady sexual partners and may be related to low HIV risk perception among those having sex within established relationships [48]. Those reporting ever having an STI had increased odds of condomless sex and those who identify as LGBTQ had increased odds of having multiple sexual partners. We also found that while having a diagnosed alcohol or substance use disorder increased the odds of reporting a new sexual partner in the past 12 months, it decreased the odds of having multiple sexual partners. This suggests that it may be important to also screen for substance use and assist with access to substance abuse treatment in conjunction with HIV risk prevention counseling, particularly among homeless persons with early homelessness onset.
Study strengths include the novel analysis of age at the onset of homelessness and current age as potential factors contributing to sexual risk behaviors among a large sample of both younger and older homeless adults. There are also several limitations. The sample was collected in one understudied southwestern region of the United States; therefore, the results may not be generalizable to other homeless adult populations. This sample was predominantly ≥25 years old, white, and heterosexual, which does not approximate the demographics of many other homeless populations. This study is also limited by its reliance upon self-report data and the cross-sectional design; thus, we can interpret associations but not causation from these data. The sample represents only homeless adults interfacing with shelter services. Because this study was not designed to include street outreach to homeless adults disconnected from service providers, the sexual health risk factors may underrepresent those of all homeless adults, both connected to and disconnected from service providers such as shelters and drop-in centers. The lower prevalence of sexual risk behaviors may also reflect the older age of the sample, as homeless youth report a higher prevalence of sexual risk behaviors [9]. As well, multiple analyses were conducted, which increases the risk of type 1 errors. Finally, participants were recruited from organizations serving homeless adults rather than homeless youth.
---
Conclusions
Current age and age at the onset of homelessness should be considered in planning sexual health programs and HIV/STI prevention interventions among homeless populations. Social service and healthcare providers should consider screening homeless adults for the age at the onset of homelessness as a gauge of sexual risk behaviors. Youth experiencing homelessness and those who experience early homelessness engage in more HIV risk behaviors than older homeless individuals and those who first experience homelessness later in adult life. To effectively scale up HIV prevention among homeless youth, health and social systems should increase access to free HIV prevention counseling that includes linkages to care and patient navigation, promote condom use and HIV prevention strategies including PrEP and nPEP, and provide free condoms, HIV/STI screening, and treatment in locations that are easily accessible by public transportation.
---
Author Contributions: All authors conceived and designed the survey instrument; Michael S. Businelle and Darla E. Kendzor collected the data; Diane Santa Maria, Daphne C. Hernandez, and Katherine R. Arlinghaus analyzed the data; all authors contributed to the writing of the paper.
---
Conflicts of Interest:
The authors declare no conflict of interest. |
TikTok is a new social media site that offers many features allowing users to create expressive content on their own. Many users on this platform create indecent video content, particularly women dancing excessively and engaging in other immoral acts. This paper therefore aims to discuss the behavior of some unmarried Muslim women on TikTok in Northern Nigeria. It was observed that the attitudes of some unmarried Muslim women on social media, especially on TikTok, are unbecoming and go against the moral teachings of Islam as they relate to social engagement. The researchers also discovered that most of these women are involved in indecent behaviors on the TikTok platform, where some unmarried Muslim females, either in the name of becoming celebrities or of advertising a brand, upload videos that expose their bodies to those who are not their Mahram. This study was conducted using a survey method, and data were collected through in-depth interviews. The researchers therefore recommend Islamic Da'wah as the way forward to curtail such unethical behaviors among Muslims in Northern Nigeria. The study also recommends that Muslim scholars in Northern Nigeria and beyond urgently intensify efforts, through preaching in Masjids, schools, and other public gatherings, on the menace of TikTok to the Muslim Ummah, most importantly the youth.
Social media is very useful for everyone to communicate and spread information to the public, and the platform that is popular with people of all ages is TikTok. TikTok has many uses, ranging from creating video content to spreading information or news; it can even be a place to make money for those who create content that many people like. 3 However, TikTok can also harm its users if it is not used wisely; for example, teenagers use it to judge content they do not like, even leaving negative comments that erode basic ethics. This article therefore discusses the consequences of TikTok use for the contemporary generation and its effects on their ethics in real life. 4 Advances in modern technology have driven continuous development in communication, one example being the Android device, which facilitates and accelerates both work and remote communication. Various new applications followed, such as social media networking platforms. 5 TikTok is a mobile app that makes it easy to create short videos, requiring little equipment beyond a mobile phone and no editing. In August 2018, the app merged with Musical.ly and began to be used all over the world; it is a Chinese video-sharing social networking service owned by ByteDance. 6 TikTok is a social media platform with millions of users around the world. Historically, it was created by Zhang Yiming in September 2016. It allows users to create videos creatively with backing-track and filter options, and its idea derives from an earlier app named Douyin.
7 Currently, TikTok has become a common social media platform used by youth around the world. Before now, TikTok was only used by persons who wanted to exist in the cyber world and listen to music. But recently, TikTok has been commonly used by persons either for business, politics, or by those who want to be celebrities. The existence of TikTok as a popular social media is influenced by the global pandemic.
TikTok is a short-video social media platform that has both positive and negative sides. 8 One of the negative impacts that parents must watch out for in their children is pornographic content and the wearing of sexy or impolite clothing. 9 For example, in mid-February 2020 a TikTok video went viral because of an immoral scene in which a teenage couple acted like husband and wife, recorded by a colleague who was dancing without realizing it. 10 This incident is one of many such cases, so children and teenagers need supervision in using smartphones, especially the TikTok application.
The instant gratification and "viral" hit of a TikTok user's video are what have sustained the app's popularity. Teens look to the app as a source of external validation and rely heavily on it to provide what they believe is total happiness. 11 Social media has become an important need for the community; this fast and advanced era makes people complacent with the ease and convenience of social media. The TikTok video-sharing application is now available on smartphones. TikTok is an online medium through which users can easily participate, share, and create content, alongside blogs, social networks, wikis, forums, and the virtual world. 12 The use of TikTok by the contemporary generation can develop meaning and self-awareness through continuous social interaction between users. 13 During the pandemic, children experienced a moral degradation influenced by the use of gadgets, among them the TikTok application, whose varied content becomes a problem when what is viewed or imitated does not reflect something worth imitating and thus harms the morals of youth. Many young adults use the TikTok application in ways that harm themselves and those around them. 14 TikTok is a medium for self-expression that provides entertainment and information, enlarges social networks, and develops the creativity of its users. However, TikTok frequently shows behavior contrary to Islamic religious values merely for the sake of popularity, phenomena such as dancing and swaying while showing body parts in ways that amount to pornography, as reported by the online media Kompas.com on 24 February 2020 under the title "Fakta Video Tik Tok Berlatar Adegan Mesum, Pelaku Umur 14 Tahun, Terlibat Prostitusi Online". 15 Islam plays a central role in the lives of Muslims, even in their usage of social media. The lives of Muslims ought to be holistically guided by the Islamic principle of social engagement.
16 Allah has spoken of the believers upholding the teachings of Islam. Allah says:
Those that turn (to Allah) in repentance; that serve Him, and praise Him; that wander in devotion to the cause of Allah; that bow down and prostrate themselves in prayer; that enjoin good and forbid evil; and observe the limits set by
Allah (these do rejoice). So proclaim the glad tidings to the believers (Qur'an, Surah At-Taubah 9:112). 17 This marks the importance of using social media ethically. Beyond individual usage, the TikTok ecosystem itself needs to be conducted ethically for ethical use of the platform to take place effectively. This is because behavior that does not accord with norms and religious values, and that can harm the morals of TikTok users, is found on the platform; it is therefore important to understand morals. Morals are defined as behaviors possessed by humans, both praiseworthy (akhlakul karimah) and despicable (akhlakul madzmumah). 18 Considering these social phenomena of TikTok usage in the community, this study aims to give an overview of the social media platform TikTok from an Islamic perspective.
---
LITERATURE REVIEW
It has been explained that the TikTok application allows users to create a video lasting approximately 30 seconds to 3 minutes set to different music. The TikTok application is a social networking site built around users uploading videos that are then shown to other users. TikTok is the most prominent and trendy application among young people. 19 TikTok is now used to foster self-confidence and has become a place for popularity and self-presence that attracts other people's interest. Such a desire is characteristic of someone with narcissistic behavior.
The influence of social media use is very diverse, with both positive and negative impacts. Selfish behavior is an acculturation of showing oneself excessively. Narcissism means a person's willingness to show that he is superior by feeling that he has potential exceeding that of others in order to get more attention and praise. 20 With TikTok, a person cannot be separated from selfish behavior, which can ultimately become an arrogant self-perception.
The TikTok application is now used excessively, which causes narcissism among youths, particularly females. Ironically, some of these females show themselves revealing their private parts when swaying in sexy clothes, so that the behavior attracts the attention of those watching. 21 It has been explained that Islam has regulated human life as well as possible. 22 From the point of view of Islam, the Glorious Qur'an is a source of law and a source of knowledge filled with lessons, wisdom, and examples of how a believer should live his life, most importantly as it relates to social engagements. 23 TikTokers are individuals who create unique and exciting video content on the TikTok application; such users become known and gain many followers because of the exciting and inspiring content they create. At the transitional age, adolescents begin to have specific interests, such as an interest in self-appearance. Adolescents try to look as attractive as possible to gain recognition and attractiveness. 24 TikTok also allows users to create short music videos.
A study by Ridgway & Clayton looks at social media from a different perspective and focuses on the direct effects of social media use on body image as it affects women. This study examined users who promoted their body image for satisfaction through selfie posts, thereby encountering the risk of social media-related conflicts and negative romantic relationship consequences. 25 Another study with a more direct focus explored the impacts of social media on the body image of youth. The results demonstrated that while using social media, youths, particularly females, felt pressured to lose weight, look more attractive or muscular, and change their appearance to become attractive. 26 According to Ajao, Bhowmik, & Zargari, social media is being used to spread false news and other immoral acts. It was established that Twitter, Instagram, Facebook, TikTok, LinkedIn, Snapchat, and similar sites with a presence in Muslim societies are likewise filled with dirtiness, and that such sites are being misused by many blogs to spread unwanted behaviors into cyberspace. 27
---
METHOD
A survey methodology was utilized in this research. There are a variety of ways to collect data for survey-based research, the most popular of which are interviews and questionnaires; here, however, the primary data were obtained through interviews. The researchers adopted three processes in putting this piece together. First, they found and gathered reference materials relevant to this research. Second, several interviews were conducted, analyzed, and elaborated to fully understand the intersections of this essay. Third, the researchers concluded the research by highlighting its outcomes for further study.
---
RESULT AND DISCUSSION
---
A. Islam and Morality
The goal of Islamic moral values is to govern human behavior within Muslim communities, to encourage and regulate such behavior for the advantage of society as a whole and of its members, and to ensure that each person attains a happy afterlife. Islam seeks to unify human characteristics, behavior, and activities in order to prepare followers of the Lord, for whom it has explained and made clear the road of virtue. Therefore, all Islamic moral values, individual ones like honesty, tolerance, compassion, love, and striving of the soul, as well as communal ones like fellow-feeling, duty, and the call to Islam, are intended to promote and safeguard the welfare of both the individual and the community.
According to Islam, morality is the set of virtues and good behavior that a person possesses to uphold societal harmony, foster peace, and defend it against vices like enmity, indecency, lust, and so on. The Qur'an declared Prophet Muhammad (S.A.W.) to have the best of manners, and he was presented as an example of a decent person to the rest of humanity, since he had embraced these virtues to such an extent. In the Glorious Qur'an, Allah states:
You have indeed in the Messenger of Allah a beautiful pattern (of conduct) for anyone whose hope is in Allah and the Final Day, and who engages much in the praise of Allah. 28 Islam undoubtedly commands the Muslim Ummah to uphold the moral principles of Islam, which bind them together. A Muslim has a duty to treat non-Muslims as well as other Muslims with honesty, tolerance, fidelity to one's word, generosity, mutual aid, and manliness. A peaceful society must first create and maintain moral principles, which are so crucial that the Prophet Muhammad stated:
I have been sent to bring the moral values to perfection (al-Baihaqi 1994, Hadith No. 20371). 29 As a universal text, the Qur'an speaks to all people, not only Muslims. Its plethora of moral lessons is evidence that it speaks to all people, everywhere, always. The moral principles outlined in the Qur'an apply to nearly every facet of life, including being modest when walking, being truthful when conducting business, being kind and responsible to one's parents, caring for plants and animals, and being a good neighbour by upholding kinship. Above all, one must properly care for and keep spouses and children.
Imam Ghazali stated that morality symbolises a disposition deeply ingrained in the soul from which various human actions flow naturally, effortlessly, and without the need for prior thought. He says this disposition is considered morally good if it leads to good deeds in the light of the Shari'ah and common sense; conversely, the source of a bad deed is referred to as a blameworthy moral source. 30
---
B. Da'wah
Essentially, Da'wah has two dimensions: external and internal. External Da'wah is to invite non-Muslims to Islam and teach them about Islamic beliefs and practices. Internal Da'wah is to teach Muslims about aspects of Islam. 31 Da'wah is a fard kifayah (an obligation that rests upon the community, not the individual): if there are individuals within a community inviting people to Islam, then others within the community are relieved of the obligation; if no one in the community issues the invitation, the sin falls on every individual within that community. 32 A person who performs Da'wah is known as a da'i (one carrying out the duty of Da'wah). Although their effectiveness will vary according to their ability, all da'i should be, at the very least, familiar with the basic teachings of Islam. 33 The technical meaning of Da'wah has two broad applications in this context. The first is Islam as a religion and the Message sent to Prophet Muhammad (may peace and blessings of Allah be upon him), that is, the true call to worship Allah alone and to keep far from polytheism; it is the comprehensive principle for the behavioural conduct of mankind as well as the establishment of rights and commandments. The second meaning is the extensive spread of Islam and the message of Allah to the people. 34 In the Qur'an, Almighty Allah instructs the believers and guides them to the successful way of calling to the Path of Allah:
"Invite (all) to the way of thy Lord with wisdom and beautiful preaching…" 35 Ibn Taimiyah sees Da'wah as belief and trust in Allah, calling to the word of testimony with full identification and good application of the teachings of Islam, which include observance of the five compulsory daily prayers, giving out Zakat, fasting the month of Ramadan, pilgrimage to the Holy House of Allah, as well as belief in Allah, His Angels, His Books, His Messengers, the Day of Resurrection after death, and good and bad destiny, and worshipping Allah as if you are seeing Him (Ibn Taimiyyah n.d.). 36 Shukri Ahmad Muwaffaq defined Da'wah as motivating people to do good deeds and keep away from evil attitudes, bringing people out of the darkness of Kufr into the light of Islam (Shukri 1988). 37 As a calling, Da'wah should be carried out practically and verbally by knowledgeable and qualified scholars, in accordance with legitimate methods and strategies suited to the circumstances of those being invited, at any time and anywhere.
---
C. Unmarried Muslim Women and Unethical Behaviours on TikTok
TikTok has become a breeding ground where some Muslim youth in Nigeria showcase and display immoral acts for the wide world to see. This section diagnoses the behaviour of some unmarried Muslim women on TikTok, particularly as it relates to Northern Nigeria. In-depth interviews capturing the views of some concerned social media users about the ugly behaviour of some of these women on the TikTok application are presented below.
Isa Idris Lukumbogo avers that, as he has understood TikTok since its inception, it is a music- and video-sharing site that gives no room for censorship, and it therefore allows teens to make and share immoral messages in the form of videos to the public without regard to religious morals. According to Isa Idris Lukumbogo, the harm associated with TikTok in relation to the morality of Muslim youth far exceeds its benefits to the Ummah, because before one comes across any Da'wah message by Muslim scholars on TikTok, users navigating the platform will see many videos showcasing indecency, including nudity and females dancing and showing parts of their bodies that are not supposed to be exposed. 38 This act, as asserted by Isa Idris, is also found among some Muslim youths who see it as a way of socialising, forgetting their Islamic identity, which forbids such acts. Most of the Muslim youths on TikTok do not in any way propagate Islam to the public but rather music and other unethical behaviors that do not align with the teachings of Islam. Bullying has also become the order of the day, as some of these youths have no regard for the rules of social engagement.
33 Ibid.
34 Arrawi, Muhammad Abdur-Rahman, Ad-Da'awah Islamiyyah D'awatun 'Alamiyyah, Dar Alqaumiyyah, 1965.
35 Qur'an, Surah An-Nahl:125.
36 Ibn Taimiyyah, Abu Al-Abbas Ahmad Alhirni, Majmoo' Fatawah Shaikh Al-Islam Ibn Taimiyyah, Tahqiq by Abdur-Rahman Bn Muhammad Bn Qasim, Ibn Taimiyya's Library, 2nd Edition, n.d.
37 Shukri, Ahmad Muwaffaq, "Ahl-Al-Fartah wa man fi hukmihim", M.A. Dissertation, Kulliyatu Usul Al-Din, Imam Muhammad Bin Sa'ud Islamic University, Riyadh; Dar Ibn Kathir, Beirut, First Edition, 1988.
39 This attitude by Muslim youth on TikTok is a serious setback for Islam and the Muslim Ummah in general; scholars therefore need to enlighten the Ummah, particularly the youth, on the danger of engaging in such acts of immorality, not only on TikTok but on other social media platforms as well. That is why, in his opinion, there is a need to checkmate TikTok as a social media platform, because it promotes nudity, pornography, and other social vices.
Munirat Halilu Abubakar opined that most videos by youths on TikTok emanating from the Northern part of Nigeria concern indecent activities geared towards the moral bankruptcy of the young generation in this Muslim-dominated society. 40 At times, short videos by young males or females show a lack of regard for religious teachings among users who spread their thoughts through the short video messages they send via TikTok.
Muhammad Maishanu Aliyu evinced that some Muslim women are misusing the TikTok application for their selfish interests by doing what is against the teachings of Islam. Most of these women display their nudity through videos and selfies which they share on TikTok, accompanied by immoral statements. Moreover, the activities of some of these TikTok users pose a great danger to the Muslim Ummah in Northern Nigeria; the menace includes, among other things, the destruction of good Islamic morals and deviation from the good path. 41 Women advertise and expose their bodies on TikTok by sharing videos of themselves dancing and uttering words that are unethical and damage the moral teachings of society.
Babagana Mallam Abatcha harped that the behavior of Muslim youths, including women, on TikTok in Northern Nigeria can vary widely, as individuals express themselves differently. However, according to Abatcha, there have been concerns globally, particularly in Muslim communities, about content on TikTok conflicting with cultural and religious values. Potential dangers of TikTok for Muslims in Northern Nigeria include exposure to content, shared and posted mostly by females, that goes against Islamic teachings; privacy concerns; and the risk of spending excessive time on the platform, which might interfere with religious obligations or other responsibilities. Individuals and communities need to promote responsible and mindful use of TikTok while respecting cultural and religious values. 42 One of the drawbacks of TikTok is that most young adults use it as they wish, which constantly causes negative videos to appear on the platform, uploaded as media content such as photos or videos. The excessive use of TikTok allows everyone to express themselves, even exploiting their bodies for pleasure or mere self-existence. However, …… avers that many of these young adults, especially females, use it to create harmful content; for example, a woman may intend to show her body shape through her mini clothes, which provokes negative views. This type of attitude leads to selfish behavior among females. Strongly immoral content is what some of these women propagate on the TikTok site.
38 Isa Idris Lukumbogo, (41 Years), Civil Servant, interviewed at Nassarawa Eggon, Nasarawa State, Nigeria, 28th November 2023.
The youth of today have been abusing the opportunity provided by TikTok as a social media platform to communicate and disseminate vital messages to the world. Ahmad Kassim harped that there are some Muslim youths out there who are bound to destroy the morality of the young generation through the use of TikTok to display all sorts of vices to the wide world. Daily, young TikTokers who are always online in the name of becoming celebrities, or who want many fans on social media, resort to posting videos of themselves singing, dancing, or making indecent utterances that should not come from true Muslim believers. 43 Some of these Muslim youths consume whatever comes their way from the new media technology without recourse to Islam and what it teaches, and in some cases many of these youths are being sponsored by the Western world to help in its agenda of destroying Islamic culture and teachings among Muslim youths.
Hajara Usman El-Kasim asserted that it has gotten to the point where some female Muslims trade themselves like commodities in the market on TikTok, with utter disregard for the moral teachings of Islam and the culture of the people of Northern Nigeria. It has reached the point where sexy videos and photos are posted on TikTok by Muslims in order to become celebrities and gain many followers and likes on the platform. In Northern Nigeria, some unmarried women shamelessly embrace their sexuality, flirt, dance, and lip-synch to songs on TikTok. They disregard rules of gender seclusion, purdah, sexual modesty, and middle-class feminine comportment by uploading such eye-catching TikToks. 44 The researchers asked an online businesswoman who had a public TikTok account with over 5,000 followers whether her parents knew about her TikTok dancing and public videos. The TikToker laughed and said her parents knew nothing about her TikTok activities, and that if they knew, they would kill her.
The researchers have also observed that while some unmarried women engage in explicitly sexualized performances, others are involved in covert sexualized performances. For example, a popular trend consisted of women making a series of innocent yet sexual facial expressions to different sounds: a startled suggestive look, a shy smile, blowing a kiss, and a petulant expression. Such memes channeled the trope of the "sexy schoolgirl" and, while appearing innocently playful, were also instances of expressive sexuality. Women also uploaded "funny" videos and acted out entertaining scenarios and dialogues, which poked fun at societal norms and engaged in crude humor. In such videos, women appeared confident, irreverent, and self-possessed. These bold performances undermined local norms of respectable femininity centered on docility, tradition, and respect for authority. Many TikToks did reproduce heteronormativity by uncritically relaying normative ideas of feminine beauty and heterosexuality. Yet such sexual and/or "frivolous" TikToks also challenged local gender norms.
42 Babagana Mallam Abatcha, Lecturer, Ramat Polytechnic Maiduguri, Borno State, Nigeria, interviewed on 13th December 2023.
43 Ahmad Kassim, (36 Years), Businessman/Social Media User, interviewed at Sokoto, Sokoto State, Nigeria, 10th November 2023.
44 Hajara Usman El-Kasim, Student, Department of Islamic Studies, Nasarawa State University, Keffi, interviewed on 12th November 2023.
According to Abdulrasheed Ishiaku, TikTok, as one of the most recently adopted social media sites, has greatly influenced the lives of youth, most especially women; its adverse effects can be clearly seen in some of its users, particularly unmarried women in the northern part of the country, where morality is held in high esteem owing to the Islamic religious teachings that dominate the region. Some unmarried Muslim females use it as an avenue for publicizing nudity and public abuses, as well as a means of connecting with the other gender far and near. Moreover, TikTok is time-consuming and a waste of resources for many of its users. 45 Never in human history has our society faced a tragedy of such immense proportions and far-reaching consequences as it does today regarding the unethical moral display by young women on social media, particularly on TikTok. Unrestrained social media use has, in a sense, diminished our moral and cultural standards to the point that things formerly considered taboo have become the norm. The tendency is so concerning that dangerous precedents are being set. 46 Mallam Saudatu Ayuba Sabo also observed that these days anyone may rise from relative obscurity to the limelight, and even become a 'celebrity', by sharing a video or photo of themselves in their nudity on one or more social media sites and encouraging others to like, comment on, and share it. Sometimes people commit such an aberration in the hope of earning partnerships with corporate entities for brand endorsement.
Rahamat Yahaya observed that part of the major problem with TikTok is its primary focus on dancing and music, both of which are not permissible in Islam. That is why one sees many girls dancing and singing on social media; their videos have been viewed by millions around the world, and those watching them are not their Mahram. Moreover, most of the females involved in these unethical acts on TikTok do so because of the temporary fame they have seen other TikTok users garner from the platform. They do not mind whether the fame is temporary or not, which is more likely to make a person feel isolated and do all sorts of things to gain people's attention for her page on the platform. 47 The negative impacts of TikTok as a social media platform on Muslim youth cannot be over-emphasized, for Islam has dealt comprehensively with all aspects of human life, be it social interaction, politics, or economics, which also include morality. Mustapha Ibrahim Muluku acknowledged that despite the role of parents in instilling better morals in their children, the reverse has been the case with some unmarried Muslim females who, at all costs, emulate the social lifestyle of the Western world, making short videos of themselves either half naked or bullying others on TikTok. 48 Furthermore, the advent of TikTok has affected morals and caused much harm to the youthful population around the world. Many times parents are not even aware that their children engage in such illicit acts, because most of the youth hide their escapades from their parents, making it hard for parents to understand what they do online. In addition, another deviant behaviour of female TikTok users in the Northern part of Nigeria is an obsession with popularity.
This is encouraged by the fact that TikTok is trending today, so they take advantage of it to gain popularity, since many TikTok users have become famous on social media through the content they create. Other users, wanting to feel the same way, will do anything to attain the popularity of such TikTok users. Yet pursuing popularity is merely a worldly pleasure, which is condemned by religion, as Allah says in Surah Hud: 15-16: Whosoever desires the life of the world and its glitter; to them we shall pay in full (the wages of) their deeds therein, and they will have no diminution therein. They are those for whom there is nothing in the Hereafter but fire; and vain are the deeds they did therein, and of no effect is that which they used to do. 49 Humans are commanded to prepare for life in the hereafter by doing righteous deeds, but humans who are busy only with the pleasures of this world will oppose and deny Allah and His Messenger. Their reward is none other than Hell, because of all their useless actions while in the world. Thus humans who are busy only chasing popularity in the world display despicable morals, for popularity can bring 'ujub, or arrogance, and riya' (…….). Even though the world is a test for humans and a place to collect good deeds, for those who are ambitious for worldly benefits and deny the values and teachings of the Qur'an, all the time spent pursuing the world is a waste and brings loss. 50 Like the solutions offered by other TikTok users not to pursue popularity alone, it would be good if TikTok users were not obsessed with popularity, especially popularity achieved in a way that is not allowed in Islam.
It can be understood from the response of the Qur'an that the non-deviant behaviour of TikTok users in Northern Nigeria is behaviour that must be maintained, while their deviant behaviour is behaviour that must be avoided.
To avoid deviant behaviour and maintain behaviour in accordance with the values of this source, one can return to the Qur'an as a way of life to solve these problems. In the Qur'an's response to TikTok users in Northern Nigeria we can also see solutions that such users can practice by applying commendable morals, such as practising patience in fighting lust, because there are many temptations in the form of interesting content that nonetheless contains much mud'arat. The second solution is that TikTok users should maintain shame to prevent deviant behaviour in their content; if shame is properly maintained, it can be a shield for TikTok users against creating deviant content. Not being obsessed with the worldly is the next solution, because there are still many TikTok users so obsessed with popularity that they produce any content without first considering the behavior displayed in it. TikTok users can practice these solutions to avoid disobedience and slander, so that no more moral deviations are committed by Muslims who use TikTok.
---
D. Islamic Da'wah as Solution to the Menace of TikTok in Northern Nigeria
Mustapha Ibrahim Muluku emphasizes the need for Muslim scholars to widen the horizon of enlightening the Ummah, most especially during congregational prayers, on the harms of the TikTok social media platform and on ways to make the best use of it; this will go a long way in minimizing the threat it poses to the Muslim Ummah. 51 Abu Mas'ud (may Allah be pleased with him) said that the Prophet once relayed one word of the earlier prophets to the people: if you have no shame, you can do what you want. 52 It means that if humans do not feel ashamed before Allah and their fellow human beings, they commit immorality; conversely, if shame is still maintained, they will stay away from disobedience. 53 Thus, it would be better if TikTok users in Northern Nigeria kept limits in creating content, maintained a sense of shame in themselves, and were not trapped into the release of lust by other social media users. Islamic Da'wah has the power to transform an individual into accepting and adopting better moral values in society. Hence the need for Da'wah to be intensified to eradicate such behaviour and bring such persons back on track, so that their activities do not affect other members of the Muslim community in Northern Nigeria.
Muslim scholars need to utilize social media for Da'wah activities to enlighten the Muslim Ummah on the need to use social media properly for social engagement, rather than for immoral acts that would bring the wrath of Allah upon them. It is also important for Muslim scholars to use these platforms to remind Muslim women that Islam has honored them, and that they therefore do not need to make videos of themselves dancing on TikTok in order to become celebrities or to advertise products to the public just to get money.
Through Da'wah, the menace of immoral behaviour on TikTok could be reduced, and by this means Muslim youth, particularly females, will develop a clearer mindset: the more mankind is reminded of its duties to Allah, the higher the development in thinking and the greater the potential to achieve a better level of moral development in everyday life. Shaping the mindset of youth towards morality takes time, and parents also need to play a role in guiding and checking their children's activities on smartphones. Through this, the number of those who engage in illicit behaviours on this social media platform would be reduced, and such behaviours would be less likely to spread to their younger ones.
Parents have a major role to play in checking the menace of TikTok among females in Muslim society, most especially in their homes. Most parents are not aware of what their children engage in on social media platforms, particularly TikTok, which has become a centre where indecent activities take place daily by so-called Muslim females in the name of becoming celebrities. It is therefore important for Muslim scholars and parents to intensify efforts to correct the ills and the menace being displayed by Muslim youths, most especially females, on social media platforms, particularly TikTok.
---
CONCLUSION
This study concludes that computers and smartphone devices have become a daily necessity due to technological advances in the realm of modern communication. The younger generation of users feels anxious and restless when far from their devices. Through these communication tools, users are interconnected with what is called social media, under which TikTok falls. Social media offers many application features that attract users, one of which is the TikTok application. Many people like this application because of its exciting challenges to imitate, both out of curiosity and as material for new content, so that users keep opening it. The application is especially cherished by young adults, particularly women who do not care about the teachings of the religion, as it allows dancing and exposure to many immoral activities. In other words, it has been found that dancing is part of the TikTok phenomenon, and Muslim scholars need to question it. In the digital sphere, most youths who use social media seek self-expression in their accounts, which in turn leads to the breaking of Islamic values among the Muslim Ummah if there is no guidance for Muslims using social media (TikTok). Moreover, there are occurrences where women who do not care about morals treat TikTok as an avenue to make money or gain fame in society through nude videos on TikTok.
How national models of solidarity shaped public support for policy responses to the COVID-19 crisis

Introduction
How do national models of solidarity shape public support for policy responses to social and economic crises? The COVID-19 pandemic has laid bare the limitations of models of economic governance across the advanced industrial world, including gaps in national systems of social protection, over-reliance on social benefits derived from labor-market relationships, and the effects of decades of underinvestment in educational and vocational-training systems. In the process, it has highlighted trends that long predate it, discrediting the long-held neoliberal nostrum that limited states and an expansive scope for market forces lead inexorably to generalized economic prosperity. It has also shown the need to revisit the question of social solidarities and norms of community and mutual support that inform prevailing conceptions of economic citizenship, as well as expectations of the scope and character of state involvement in the economy. In the process, it has brought renewed attention to the origins and effects of nationally distinctive social-protection institutions, which now more than ever seem essential to the capacity of capitalist economies and their citizenries to adjust to shifting social and economic challenges.
Using the COVID-19 pandemic as a signal case, we investigate how the acute uncertainties occasioned by such shocks reshape citizens' capacity for empathy and mutual support, their willingness to sacrifice for the sake of societal welfare, and their support for particular kinds of collective responses that enjoy broad legitimacy and reflect a shared sense of public purpose. In so doing, we draw analogies with other kinds of national trauma, such as wars, which have historically transformed both patterns of social solidarity and support for an expansion of government's role, as with the creation of a comprehensive British welfare state in the aftermath of World War II. We present systematic public-opinion data and tie public views to policy initiatives undertaken by advanced industrial states in order to show how, and the extent to which, public policies have garnered support and how patterns of policy interventions have varied cross-nationally. In the process, we also generate broader insights about how historical episodes that generate massive increases in economic insecurity inform distinctive collective understandings and support particular patterns of economic governance. In so doing, we move beyond prevailing institutionalist and rationalist approaches to investigate the sources of existing institutional and policy frameworks in public opinion and prevailing public discourses related to work, fairness, the economic role of the state, and the meaning of economic citizenship and solidarity. This means treating existing institutional frameworks not as analytical points of departure, but as expressions of the underlying public norms and models of solidarity that also shape the character of policy responses to economic shocks.
We focus on Germany and the United States, countries with widely divergent modes of integration of capitalist markets, differential levels of state capacity, distinctive systems of social protection, and starkly different institutionalized relationships between capital and labor. Attention to these differences allows us to explain how interactions between social-protection arrangements and related labor-market institutions inform public expectations of government and support for a range of policy responses to COVID-19. Such distinctions between the American and German models, and by extension, liberal and mixed economies more generally, have been analyzed in decades of research, from the comparative-welfare-state literature (Esping-Andersen, 1990) to the well-known distinction between "liberal" and "coordinated market economies" advanced by the "Varieties-of-Capitalism" literature (Hall and Soskice, 2001). However, we go beyond them in analyzing broad patterns of social solidarity and attendant models of economic governance, focusing on the state as a key variable, with particular emphasis on how prevailing conceptions of social obligation, shared by both elites and mass publics, support distinctive patterns of state intervention. We trace American and German policy responses between March 2020 and July 2021, the period during which the key policy responses to the pandemic were crafted, across a number of policy domains, including social protection, financial assistance to firms, tax breaks for individuals and families, and fiscal-stimulus initiatives. We then undertake systematic analysis of public opinion in Germany and the U.S. about such initiatives and broader questions of trust, inequality, and solidarity. This comparative case-study approach furthers our understanding of causal mechanisms at work in these two country contexts, moving beyond the mere observation of "models" to key social and discursive mechanisms that sustain them over time.
We argue that differing conceptions of public purpose and models of solidarity have led to distinctive patterns of public support for both state action in general and policy responses. In both countries, the emotional trauma wrought by the pandemic led to a marked increase in public trust of government and public officials. At the same time, the policies supported by the public varied significantly with levels of economic embeddedness and the degree of institutionalization of economic relationships. In the U.S., where such relationships are much more disembedded and atomized, where public discourse reflects a more individualized conception of social organization, and where social trust and cohesion have been undermined by partisan and ideological battles, public and elite support has coalesced behind particularistic and palliative benefits aimed at individuals and affected firms. In Germany, by contrast, both the public and elites have favored policy instruments that support strategically important groups, such as skilled labor and firms in export-intensive industries, supported by a more robust conception of social purpose and mutual reliance and aiming proactively to prevent or minimize social and economic dislocation. These patterns of public opinion and institutional configurations both reflect and reinforce distinctive models of social solidarity. In Germany, this model tends to reflect a greater sense of shared public purpose and collective welfare, focused upon the economic fortunes of key groups in the economy, within which a sense of shared identity tends to cohere. In the U.S., by contrast, a much more individualistic conception of deservingness, effort, and responsibility undermines such collective identities and, with them, support for government initiatives in the service of a sense of shared public purpose.
These differential responses and the models of solidarity that underpin them carry with them distinctive sets of life chances for workers, for whom structural economic and power inequalities are both symptoms and reinforcing causes of nationally distinctive social contracts.
In the next section, we develop our theoretical framework, which synthesizes sociological and historical conceptions of capitalism with "moral-economy" understandings of fairness and associated patterns of public opinion. We then present an overview of German and American policy responses to the pandemic, highlighting characteristic differences. Then, we present a second set of empirical data, connecting patterns of public opinion in the two countries to levels of social trust and support for particular policy interventions. We end by exploring the theoretical significance of our findings and speculating about their implications for other episodes of national trauma.
---
From embeddedness to public purpose and solidarity: theoretical underpinnings of responses to economic crisis
The epidemiological and economic shock of the COVID-19 pandemic was equally a social and political one, unsettling conventional wisdoms about the relationship between the state and the market. As such, it presents an opportunity to analyze the relationship between public support for social and economic policies designed to buffer workers, and norms relating to social solidarity and mutual support among citizens. Our theoretical point of departure is that the degree of social cohesion, involving horizontal bonds among citizens, shapes citizens' attitudes toward and trust in the state. In investigating patterns of change in both of these contexts, we shed light on how periods of heightened economic uncertainty and trauma shape the structure and cohesiveness of social bonds and public support for evolving models of economic governance. Thus, we work to connect, theoretically and empirically, patterns of social cohesion and embeddedness to the possibilities for a congruent conception of public purpose between the public and governing elites.
---
Theoretical approaches to systemic reactions to crises
In developing our macro-level theoretical framework, we build upon two distinctive scholarly traditions. The first entails work on the comparative historical sociology of capitalism, exemplified in the work of Polanyi (1957, 2001). Polanyi provides a sociological conception of the emergence of capitalism, demonstrating that a "market society" was a deliberate construction of the state constrained by limitations to the commodification of labor. Prior to the beginning of the process of market construction in the 18th century, economic life was informed by the older norms of reciprocity and redistribution, informing such practices as sharing among kinship groups, in contrast to the transactional norms that emerged subsequently (Polanyi, 1957). The implication is that the norms that govern patterns of adjustment to economic disruption are informed by deep structures of human solidarity that legitimate particular patterns of state economic engagement and attendant policy expectations. This idea suggests in turn that differently constituted political economies, with varying historical patterns of economic relationships among groups and between groups and the state, will generate different public expectations and support for policy responses.
Polanyi's emphasis upon the socially embedded character of capitalist economic relations provided a touchstone for critiques of the neoliberal, market-based orthodoxies since the 1970s. Granovetter (1985), for example, brought similar insights to bear on contemporary economic debates over the appropriate and feasible scope of market arrangements in advanced industrial economies. He argues that even in highly modern forms of economic life, the level of economic embeddedness "has always been and continues to be more substantial than allowed by economists and formalists" (Granovetter, 1985, p. 483). Economic sociologists locate the foundations of capitalist economies in the social relationships on which market transactions ultimately rely, a view at odds with the transactional and atomized conception of human beings central to classical models. While Polanyi's account is more developmentalist than Granovetter's, they share a key conviction that is central to our approach: that economic and social relations are co-constitutive, and that individuals' capacity to support collective economic endeavors is tied to the extent and character of social embeddedness. In this way, economic activity is understood, not merely as a matter of individual initiative, but also as part of a broader pattern of engagement in which citizens derive support from one another and the state.
The second, related, body of scholarship that informs our analytical framework seeks to historicize and identify mechanisms that govern workers' individual and collective responses to disruptive economic change. The "moral-economy" literature, exemplified in the work of Thompson (1964), grew out of the "New Left," including scholars such as Stuart Hall and Ralph Miliband, who contended that "culture and ideology had become as important as class" (Menand, 2021, p. 49). Thompson (1971, p. 79) argues that "a moral economy . . . suppose[s] definite, and passionately held, notions of the common weal. . . a consistent traditional view of social norms and obligations, of the proper economic functions of several parties within the community".
Such scholarship provides powerful tools for understanding contemporary public and élite reactions to the devastation of the COVID-19 pandemic (for a similar approach, see Koos and Sachweh, 2019). In like fashion, we seek to understand how differing degrees of social embeddedness, and the horizontal ties, both actual and notional, that constitute them, inform public trust in government and support for particular kinds of policy responses. This leads to the first of our five theoretical expectations, which establishes a broad theoretical framework for the micro-level propositions described subsequently.
Proposition #1: Individuals are connected to the market in nationally distinctive ways, and differing patterns of social embeddedness generate divergent expectations of the state.
Taken together, these literatures generate different expectations regarding citizens' responses to exogenous shocks such as the COVID-19 pandemic. In contrast to economistic models of atomized individuals, they posit a deeply socially embedded frame, within which individuals act within social contexts and are willing to constrain their individual prerogatives for the sake of collective welfare. Relatedly, they lead one to expect that societies with different constellations of political and social arrangements will respond differently to such shocks, in terms of both citizens' willingness to acknowledge the importance of societal benefit and their expectations of the character and extent of state support.
---
Micro-level approaches to the nexus between politics and public opinion
We now consider the mechanisms that inform individuals' reactions to collective shocks. The sudden onset of COVID-19, and the resulting epistemological and narrative instability across both mass publics and elites, provides potentially fertile ground on which to assess the effects of such shocks on social cohesion, trust, and support for social and economic policies. Whereas our knowledge about political and social institutions guides our expectations about what governments might be expected to do, we must turn to public-opinion research to understand how the public reacted in these two countries and how such reactions shaped state responses.
In theory, it is possible to differentiate between the public's reactions to the pandemic itself and to policy responses to deal with it, by asking questions about both the pandemic itself and government measures. In practice, however, this proves to be more difficult, as there is a significant time lag between sudden events and their effects on public polling.
A rich scholarly literature about crises and their effects on public opinion provides guidance about how to understand crises that cannot be easily attributed to broader problems with society, government, or the economy, with much longer gestational periods and time horizons. Whereas, in such instances, citizens' attitudes about crises are shaped by their assessment of the perceived underlying problems (Goerres and Walter, 2016), we focus instead on public-opinion reactions to pandemics and other similar catastrophes, such as wars and natural disasters.
There is robust evidence for a unifying effect of external shocks in support of the executive and incumbent governments and administrations. Such "rally" effects can be seen after military conflicts, assuming the presence of some media attention. The micro-level mechanism is that those who are ambivalent about the executive tend to increase support for government (Baker and Oneal, 2001; Baum, 2002). Elite criticism of government immediately after the onset of a crisis is often less prevalent in the media (Groeling and Baum, 2008), and citizens perceive a stronger elite consensus in such contexts and adjust their attitudes accordingly.
The individual reactions that lead to such effects are driven by powerful emotions. Threats trigger anxiety and the desire for security, which citizens often seek from public officials and institutions (Pierce, 2021). Some psychological theories emphasize humans' yearning for a world that is predictable and secure (Lambert et al., 2011). Although military conflicts are the most-studied trigger of a rally-around-the-flag effect, similar effects can be observed in the affirmation of in-group memberships in response to perceptions of threats from out-groups. In other words, perceptions of threats and insecurity tend to generate expectations of both the state and fellow citizens, and the effects of such dynamics extend beyond policies to social behavior more generally. Altruism with respect to one's in-group seems closely tied to conflict and catastrophe (Bowles, 2008). That said, one should expect different kinds of public reactions and different levels of support for policies that reflect and reinforce distinctive conceptions of social solidarity.
Proposition #2: The onset of pandemics will increase support for incumbents and political trust in government in the short run.
Wars have been shown to create prosocial behavior at the individual level (Bauer et al., 2016) and to encourage burden-sharing and institution-building at the collective level (Obinger and Petersen, 2017). The collective experience of hardship during war seems to lead to a logic of "we share the burden, we share resources" (see Titmuss, 2019, ch. 4). COVID-19 was not a war, but it had some characteristics that remind us of wartime experiences.
For example, COVID-19 was potentially deadly for millions, with unknown consequences for citizens' long-term health. Increases in prosocial behavior were a plausible expectation, as the economic devastation wrought by the pandemic far exceeded the coping capacity of individuals or even significant social groups. The traumatic experience of COVID-19 might also lead to altered preferences and thus a higher level of prosociality reflected in social trust. (We are, however, constrained by the timing of polling efforts and their relationship to government actions, such as executive orders introducing physical distancing measures.)
Proposition #3: Social trust will increase in both countries in the short run.
As with wars, in this scenario, citizens might support major government interventions related to health policy, as in the case of economic policy (Mizrahi et al., 2020). It would thus be plausible to expect citizens to grow accustomed to a more active role for government and for this effect to be more visible in the U.S., where baseline state capacity is weaker than in Germany. However, it remains unclear whether this shift in attitudes would be related primarily to the scope of government or, rather, to the intensity of its activities.
Proposition #4: With respect to social and economic policy, path-dependent cross-national policy divergence will develop, with increased support for highly individualized provisions in the U.S., in contrast to support for more collectively-or group-oriented policies in Germany.
Given the magnitude of the pandemic's shock to the two societies, it is reasonable to expect significant change in the policy priorities within their populations. However, the two countries differ significantly in the organization of their healthcare systems, a fact that would be consistent with different sets of expectations. In Germany, coverage of health insurance is quasi-universal, with funding burdens and managerial tasks shared between worker and employer representatives. The public-hospital system is robust, and the public-health infrastructure is well developed. In the United States, by contrast, despite the expansion of coverage resulting from the Affordable Care Act, coverage is spotty and incomplete and benefit terms are much less generous. Public hospitals are also fragmented and uneven in coverage, and public-health infrastructure is underdeveloped and underfunded, with widely varying capacities across states.
The pandemic should increase public concerns about unemployment, though the character of the concern and respective points of emphasis are unclear a priori. In Germany, normal unemployment insurance pays up to 67% of a worker's previous wage, with benefit duration scaled by age and time of employment but typically lasting at least a year. Thereafter, the less-generous "Hartz IV" benefit kicks in (Vail, 2010). In the United States, by contrast, unemployment insurance is limited to a few hundred dollars per week, varying significantly by state in generosity and terms of eligibility. Whereas German workers and employers view unemployment insurance as a benefit paid for through contributions over time, in the U.S. the benefit is heavily stigmatized and is contingent upon often-onerous job-search, reporting, and monitoring requirements (Herd and Moynihan, 2018). These policy and institutional differences are both possible drivers of distinct patterns of state intervention and historical artifacts of deeply rooted differences in public conceptions of politics and social organization.
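The contrast between the two systems can be made concrete with a small, hypothetical calculation. The 67% German replacement rate comes from the figures above; the flat $400-per-week U.S. benefit and the wage levels are purely illustrative assumptions, not reported data:

```python
# Stylized comparison of unemployment-benefit replacement rates.
# German benefit: earnings-related (up to 67% of the previous wage).
# U.S. benefit: a flat weekly amount (the $400 here is an assumption).

def german_benefit(monthly_wage, rate=0.67):
    """Earnings-related benefit: a fixed share of the previous wage."""
    return rate * monthly_wage

def us_benefit(weekly_flat=400.0, weeks_per_month=4.33):
    """Flat weekly benefit, converted to a monthly amount."""
    return weekly_flat * weeks_per_month

def replacement_rate(benefit, monthly_wage):
    """Share of the previous monthly wage that the benefit replaces."""
    return benefit / monthly_wage

for wage in (2000.0, 4000.0, 6000.0):
    de = replacement_rate(german_benefit(wage), wage)
    us = replacement_rate(us_benefit(), wage)
    print(f"wage {wage:7.0f}: DE {de:.0%}  US {us:.0%}")
```

Because the German benefit is earnings-related while the U.S. benefit is flat, the U.S. replacement rate falls sharply as previous wages rise, which is one concrete sense in which the two systems encode different models of solidarity.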
The economic dislocation resulting from the pandemic leads one to expect social inequality to become more prominent in people's minds, though in ways shaped by these policy and institutional differences. The pandemic was much more difficult for people with fewer assets, who could not work remotely, and who were responsible for caring for children or dependent adults. Under such circumstances, it is reasonable to expect an increase in the salience of economic inequality, but it is less clear a priori how citizens socialized in these two systems would interpret and respond to it. Traditionally, the American public is much more tolerant of social inequality. Bénabou and Tirole (2006) relate this discrepancy to the prevalence of the belief that individuals and their children can succeed economically. It is thus reasonable to expect that the salience of social inequality would rise in both countries, but that the demand for government action will be limited in the U.S., as fewer citizens view government as a legitimate remedy to social problems. This reasoning leads us to our fifth and final proposition:
Proposition #5: Support for policy measures to reduce inequality will increase in Germany, but not in the U.S.
---
Research design and data
We look at two country-crisis episodes: the United States in 2019-2021 and Germany in 2019-2021. We concentrate on policy responses at the national level as a first set of reactions and on public opinion as a second. We also consider variation within that period over time. In each country, the challenges related to health, the labor market, and the economy were similar, but the reactions were quite different. We are thus echoing other comparative approaches to moral economies in which the two countries are often selected as representative of different welfare regimes (Sachweh and Olafsdottir, 2012) (see Supplementary Appendix A.1).
We explore the extent to which similar challenges were channeled differently in the two countries. Causally, we employ the logic of within-case designs, assuming that Germany in February 2020 is similar to Germany in January 2020, with the obvious difference that the first local infections of COVID-19 were discovered in February. The exogenous origin of the pandemic leads to the plausible assumption that a temporal change between January 2020 and subsequent periods can be attributed to COVID-19. However, we must be careful not to discount the difference between reactions to the pandemic and reactions to behavior by political actors. Thus, for instance, a rise in political trust after the onset of the pandemic might be a function of fears' being more prevalent in the population or, instead, of an appreciation of adopted policy measures.
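The within-case logic can be sketched in miniature: compare a public-opinion indicator just before and just after the onset of the pandemic, and attribute the shift to the (exogenous) shock. The polling values below are invented placeholders, not actual survey results; only the before/after comparison mirrors the design described above:

```python
# Minimal sketch of a within-case pre/post comparison around an
# exogenous shock (here: the onset of COVID-19).

def pre_post_shift(series, onset):
    """Difference between mean values after and before an onset date.

    `series` maps sortable period labels (e.g. '2020-01') to values.
    """
    before = [v for k, v in series.items() if k < onset]
    after = [v for k, v in series.items() if k >= onset]
    return sum(after) / len(after) - sum(before) / len(before)

trust_in_government = {  # hypothetical monthly % expressing trust
    "2019-11": 40, "2019-12": 41, "2020-01": 40,
    "2020-02": 42, "2020-03": 55, "2020-04": 58,
}
print(pre_post_shift(trust_in_government, onset="2020-02"))
```

As the surrounding discussion cautions, such a shift conflates reactions to the pandemic itself with reactions to the policies adopted in response, so the comparison identifies a combined effect, not a clean decomposition.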
Public opinion data bear out this contention. In a recent Pew survey, fewer than four in ten Americans surveyed believed that addressing inequality should be a top priority of government. In contrast, in an OECD poll, more than half of Germans surveyed strongly believed that inequality was too great, well above the OECD average and in increasing shares over time.
---
See Mitchell and OECD.
We have collected extensive data, building on various efforts by other scholars (e.g., Bruegel, 2020;McCollum, 2020;Matthews, 2021) and on secondary usage of existing analyses. We also make use of ten commercial and scientific public opinion data sources, some of them with different surveys. All data are accessible to the public (Edelman Trust Barometer, More in Common, Politbarometer, Freiburger Politikpanel, Pew International), scientists (GESIS internet panel) or through available commercial databases (Kaiser Family Foundation, Gallup). The publicopinion data differ slightly in their sampling procedure (some use random sampling, some quota sampling, others convenience sampling) and their survey mode (phone, face-to-face, online) (see Supplementary Appendix A.2).
We use different indicators of public opinion to assess the relationship between citizens and the state and among citizens. We examine confidence in national government, trust in national government, support for the incumbent, social trust, and attitudes toward specific policies. All of these variables capture slightly different aspects of citizens' connections to the state or to other individuals. The objective of this triangulating kaleidoscope of public-opinion pieces is to paint a broad picture of changes attributable to the pandemic and attitudes toward policies adopted to fight it.
We can trace changes over time of some measures of public opinion and static snapshots of others. Given the observational nature of our data, we cannot distinguish whether changes over time were already anticipated by policy-makers when implementing these public policies.
---
Empirical analysis: public-policy responses to COVID-19 in Germany and the United States
Like many advanced industrial countries, Germany and the United States rapidly deployed vast fiscal and administrative resources following the advent of COVID-19 in March 2020 and continued economic support into the summer of 2021. These measures included loan guarantees and payroll subsidies to businesses, investments in public infrastructure, and direct assistance (Figure 1).
In Germany, the scale of policy responses echoed that following reunification in 1990, when more than €2 trillion was spent over three decades (Vail, 2018). In the U.S., the response involved a major deployment of state power and resources unmatched since the Great Depression, shifting the prevailing policy-making paradigm away from the small-government and neoliberal orthodoxies that even the post-2008 Great Recession had been unable to displace (Alter, 2021; Carter, 2021).
Both countries' fiscal-and economic-policy responses were breathtakingly ambitious. Including all discretionary spending until March 2021, the U.S. spent more than any other country (at 27.1% of GDP), while Germany's, at 20.3% of GDP, was seventh largest in the world (Matthews, 2021) (see Table 1).
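The GDP shares cited above can be converted into rough absolute amounts. The 2020 GDP figures used here are approximate outside assumptions for illustration, not data from the sources cited:

```python
# Rough translation of discretionary-spending shares of GDP into
# absolute amounts. GDP figures are approximate 2020 values and are
# assumptions for illustration only.

US_GDP_USD = 20.9e12   # approx. U.S. GDP, 2020 (assumption)
DE_GDP_EUR = 3.3e12    # approx. German GDP, 2020 (assumption)

def spending(share_of_gdp, gdp):
    """Absolute spending implied by a share of GDP."""
    return share_of_gdp * gdp

us = spending(0.271, US_GDP_USD)   # 27.1% of GDP (from the text)
de = spending(0.203, DE_GDP_EUR)   # 20.3% of GDP (from the text)
print(f"U.S.:    ~${us / 1e12:.1f} trillion")
print(f"Germany: ~€{de / 1e12:.2f} trillion")
```

On these assumptions, the U.S. share corresponds to several trillion dollars and the German share to well under a trillion euros, underscoring the difference in absolute scale between the two responses.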
---
FIGURE 1: Major events in the COVID-19 pandemic and policy responses.
Pre-empting dislocation in Germany: subsidizing business, supporting core workers and families, and bolstering public investment

Germany's economic-policy strategy in the wake of COVID-19 involved a combination of generous support for public-health initiatives and an extension of both direct and indirect support to core constituencies of the Social Market Economy, including industrial firms, small businesses, workers in key industries, and families. In late March 2020, Merkel's government announced two major initiatives to support economic activity and buffer disproportionately affected groups. The first, the so-called Corona-Schutzschild für Deutschland (Coronavirus Protective Shield for Germany), allocated €353.3 billion, including €3.5 billion for personal protective equipment (PPE) for hospitals and investments in vaccine development; €55 billion to remedy hospitals' and doctors' deteriorating finances and to provide support to families, including subsidies for lost earnings and extended access to family allowances; and €50 billion in so-called Soforthilfe (Immediate Assistance) for small businesses, freelancers, and the self-employed (Bundesfinanzministerium, 2021a). The second initiative, the Wirtschaftsstabilisierungsfonds (Economic Stabilization Fund), earmarked €891.7 billion for larger firms, particularly those with strategic economic importance. This measure included €400 billion in loan guarantees, €100 million for an assistance program for firms within the Kreditanstalt für Wiederaufbau (KfW) (a public development bank), and tax breaks and abatements to help firms clean up their balance sheets (Bundesfinanzministerium, 2021a). The government also provided extensive support for workers, particularly those in manufacturing and key export sectors. The signal initiative in this category involved the so-called Kurzarbeitergeld, or "Short-time Work Program."
Originally created in the aftermath of German reunification and resuscitated in the wake of the Great Recession, these schemes allow at-risk workers to work reduced hours while receiving up to 90% of their previous pay, so as to avoid disruptive and costly layoffs. Between March and December 2020 alone, an additional €23.5 billion was spent on related programs (Bruegel, 2020). In a supplementary budget of €122.5 billion adopted in the same month, the government extended other forms of support to German workers, including an additional €7.7 billion for the second-tier assistance program for the unemployed.
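The logic of the short-time work scheme can be sketched with illustrative numbers, using the up-to-90% replacement figure cited above; the wage level and the hours reduction are hypothetical:

```python
# Stylized illustration of short-time work (Kurzarbeit): hours are cut,
# the employer pays only for hours worked, and the scheme tops total
# pay up toward a share of the previous wage (here the up-to-90%
# figure from the text). All numbers are illustrative.

def short_time_pay(previous_pay, hours_share, replacement=0.90):
    """Return (total pay, top-up) for a worker on reduced hours."""
    earned = hours_share * previous_pay          # pay for hours worked
    cap = replacement * previous_pay             # replacement target
    top_up = max(0.0, cap - earned)              # scheme's subsidy
    return earned + top_up, top_up

# A worker previously earning 3,000 per month, cut to half hours.
total, subsidy = short_time_pay(3000.0, hours_share=0.5)
print(total, subsidy)   # 2700.0 of which 1200.0 is the top-up
```

The point of the design is visible in the numbers: the worker keeps most of their previous income despite losing half their hours, so the firm avoids layoffs and the worker avoids a sharp income drop.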
In June 2020, the government adopted a second major stimulus package worth €130 billion that focused on tax relief to German firms and consumers and additional resources for families with children, long a core constituency of the post-war Social Market Economy. The two VAT rates were cut from 19 to 16% for the standard rate and from 7 to 5% for necessities, such as groceries. The initiative also provided an additional €300-per-child bonus payment to families and more than doubled the income-tax exemption for single parents to €4,000. The package also extended a number of tax breaks, including subsidies for municipalities suffering from declining tax revenue and a 40% cap on social-security contributions. For firms, it increased depreciation allowances and created more generous provisions for declaring losses from previous tax years. Finally, the measure made significant investments in renewable energy and infrastructure, including investments in electric vehicles, battery-technology development, and the modernization of Germany's aging fleets of buses and commercial vehicles. In October, an additional €15 billion was provided for grants to companies, and in March 2021 an additional €150 per child was paid to families (Bundesfinanzministerium, 2021b). Although the EU invested significant funds in vaccine development and distribution, both public-health regulations and investments in economic-adjustment funding were undertaken largely on the national level. Although some of the funding [e.g., the Recovery and Resilience Facility (RRF)] derived from EU sources, the allocation of the spending was largely an affair of the member states.
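The household impact of the temporary VAT cut can be illustrated with simple arithmetic. The rate changes (19% to 16% standard, 7% to 5% reduced) are those described above; the basket values are hypothetical:

```python
# Worked example of the temporary German VAT cut. Basket values are
# illustrative assumptions, not survey data.

def gross(net_price, vat_rate):
    """Consumer price including VAT."""
    return net_price * (1 + vat_rate)

def vat_saving(net_price, old_rate, new_rate):
    """Reduction in the gross price when the VAT rate is cut."""
    return gross(net_price, old_rate) - gross(net_price, new_rate)

# Hypothetical monthly basket: EUR 800 net at the standard rate,
# EUR 300 net at the reduced rate (e.g., groceries).
saving = vat_saving(800, 0.19, 0.16) + vat_saving(300, 0.07, 0.05)
print(f"monthly saving: EUR {saving:.2f}")
```

The saving is simply the net spending times the rate reduction (3 percentage points on standard-rated goods, 2 on necessities), so the relief scales with consumption rather than targeting particular groups.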
Taken together, these national-level initiatives provided urgently needed support for both investment and consumption and represented remarkably open-ended commitments for a country normally associated with fiscal probity. At the same time, they reflected significant continuity of policy orientation, with a focus on key social and economic groups, and a more socialized conception of welfare, pre-emptively intervening to avoid social and economic damage rather than mitigating it after the fact. Though Germany's more robust network of automatic stabilizers, such as unemployment insurance, might help to explain the fact that officials favored a more targeted approach, with the general population benefitting from pre-existing, more general benefits, the reduction in generosity of such benefits with the so-called "Hartz IV" reforms in the early 2000s has constrained such support (Vail, 2010). Accordingly, the disproportionate support afforded to economically important groups during the pandemic suggests the durability of established insider-outsider cleavages that have long characterized the German export-led growth model. To the extent that such automatic stabilizers are operative, they provide support that is different in scope and scale from the robust, targeted measures that constituted the core of the German response.
Repairing damage in the United States: investing in public health and a targeted, short-term expansion of the safety net
In early 2020, Congress, in rare bipartisan fashion, responded aggressively to the pandemic, passing three distinct but related measures. The first, the "Coronavirus Preparedness and Response Supplemental Appropriations Act," devoted a modest $8.3 billion to support public health, dedicated funds to vaccine research, funded broad public-health initiatives on the federal, state, and local levels, and purchased personal protective equipment for medical professionals (Breuninger, 2020).

The second package, the so-called "Families First Coronavirus Response Act," focused on the pandemic's economic effects on individuals. It devoted a total of $192 billion to paid sick and medical leave for certain categories of businesses and significant subsidies to the Supplemental Nutrition Assistance Program (SNAP, colloquially known as "Food Stamps"), temporarily increased the generosity of Medicaid and Medicare (the federal health-insurance programs for the poor and elderly), and subsidized existing unemployment-insurance benefits (Committee for a Responsible Federal Budget, 2020).
The third, and much more extensive measure, dubbed the "Coronavirus Aid, Relief, and Economic Security Act" (CARES), represented the most extensive crisis-related package since the New Deal. Costing $2.2 trillion, the measure involved four distinctive areas of assistance. The first entailed an increase in the generosity of unemployment insurance, providing an additional $600 per week and extending the length of eligibility. A second measure offered one-time relief payments of up to $1,200 per adult and $500 per child. The third extended support to businesses directly affected by the collapse in demand, including $350 billion for forgivable loans to small and medium-sized enterprises and $58 billion for airlines, which had seen air traffic decline by about 60% (Slotnick, 2020). The fourth measure included funds for overwhelmed hospitals; additional funds for vaccine development, veterans' health, and the Centers for Disease Control and Prevention; and money for medical equipment and community health centers. In April, Congress passed an additional program, the Paycheck Protection Program, which provided loans, forgivable under certain circumstances, to firms in exchange for their commitment to keep workers on their payroll. In addition, the law created a new type of unemployment assistance, as opposed to insurance, which extended eligibility to previously ineligible people, including those who had exhausted their state-level benefits, those who quit their jobs to care for ill family members, and the self-employed whose incomes were affected by the pandemic (Stone, 2020).
Following President Biden's electoral victory in November 2020, Congress adopted a $900 billion package that focused on extending existing programs to support affected households and businesses. The measure provided additional income-contingent stimulus payments of $600 per person, additional unemployment payments of $300 per week, childcare and nutrition assistance for the poorest Americans, and emergency assistance to renters. On the business side, it provided an additional $248 billion for the Paycheck Protection Program and funding for colleges and universities and the entertainment industry. It also devoted modest resources to infrastructure initiatives, including money to expand broadband internet access for families whose children were being educated at home, and $45 billion for airlines, highway repairs, and public transportation (Siegel et al., 2021).
Following two surprising Democratic victories in the Georgia Senate runoffs, which gave Democrats unified political control, Congress enacted the $1.9 trillion American Rescue Plan in March 2021, with no Republican support. This package was unprecedented in scope, with large extensions of previous measures as well as an array of new initiatives, including one-time, income-contingent payments of $1,400 for each adult and child and an extension of additional federal payments for unemployment insurance. Breaking with historical patterns of federal support for children, which had traditionally been provided through nonrefundable tax credits, the measure introduced 6 months of direct family allowances, scaled by family income and refundable beyond a family's tax liability. This paradigm-shifting initiative, which like most of the package was fiercely opposed by Republicans, represented an unprecedented assumption of federal responsibility for supporting children. The package also extended $350 billion to state and local government and money for educational institutions, restaurants, early-childhood development programs, vaccine distribution, public transportation, and infrastructure, as well as more than a half billion dollars for the Federal Emergency Management Agency's Emergency Food and Shelter Program. Focusing overwhelmingly on directly affected individuals and business and low-income families, the measure was much more targeted than its German counterpart and reflected a more individualistic and fragmented model of solidarity. This difference is consistent with broader policy patterns in the two countries' welfare states. Germany's neocorporatist logic and administration yield contributory policies jointly managed by workers and employers (or other relevant actors, such as doctors' associations in health insurance) across a wide range of policy areas, from unemployment insurance, to pensions, to health care. In the U.S., with the sole exceptions of Social Security and Medicare, by contrast, policies tend to focus on individual behaviors and often impose onerous work and job-search requirements for eligibility, though this varies significantly by state. Salient examples include Temporary Assistance for Needy Families (TANF) and many state-administered unemployment-insurance schemes.

The contrast between the targeted, time-delimited nature of this program and its reliance on loans, and the much more generous and open-ended Kurzarbeitergeld program in Germany is typical.

The recent excision of this measure from the Inflation Reduction Act, passed in August 2022, shows how precarious this shift was.
---
Empirical analysis: public opinion reactions
In view of these divergent policy responses, we now turn to the mechanisms, located in individuals' views and priorities, that lie behind such patterns. We start by examining potential rally-tothe-flag effects. Gallup runs a long-established global series asking for confidence in one's government (Table 2).
Both countries reveal a clear jump in aggregate confidence in government between 2019 and 2020. The German public's confidence rose by 8 percentage points, from 57 to 65%. The American public's confidence rose by 10 percentage points, from 36 to 46%. What is more, the high levels of confidence in both countries are exceptional in the long run, dating back to 2006 in this series. The second-highest confidence level in Germany was 63% in 2015. In the U.S., only 2006 and 2009 witnessed higher levels, at 56% and 50%. From 2020 to 2021, confidence declined again in both countries, by 5-6 percentage points. This is what we would expect if an emotionally driven rally-to-the-flag effect were in place. This outcome is consistent with Proposition #2 above, relating to expectations of increased trust in government, though it also provides reason to expect distinctive patterns of support cross-nationally. For a comparable indicator (trust in government, Table 3), we see a similar picture: a sizeable jump in trust in government, by 19 points in Germany and nine points in the U.S., between 2019 and 2020, followed by a decline in 2021 that we have already seen for confidence in national government. This increase in political trust after the onset of the pandemic has been demonstrated for other contexts in Western Europe (Bol et al., 2021; Esaiasson et al., 2021; Oude Groeniger et al., 2021; Schraff, 2021).
In both countries, there was an increase in approval of the leadership of the respective country (Table 4), which was much smaller in the United States (+5%) than in Germany (+14%). These trends reflected the different levels of general support or approval of national governments (higher in Germany than in the U.S.). The relative changes were upward in both countries between 2019 and 2020. After 2020, the United States experienced a change in leadership from Trump to Biden. German approval rates decreased by nine points whereas the U.S. public had a small increase in approval by two points. Despite the many institutional differences, as expected in Proposition #2, we see similar indications of deeply human, emotional reactions: human nature, and the associated need for assurance and safety, dominates over institutionally embedded learning experiences.
We also find some evidence of an increase in social trust (Proposition #3, Table 5). In 2020, the levels of social trust were indistinguishable in the two countries, at 58%-59%. In 2014, however, the levels were much lower, at 42 and 38%, respectively. In 2017/2018, the German estimates were unchanged, whereas in the U.S., they had declined to 32%. Thus, available evidence shows higher levels of social trust in 2020 than in earlier years, but with similar levels at the height of the crisis. This outcome supports our proposition that countries with higher degrees of social embeddedness will experience greater social trust in the presence of social and economic dislocation.

Another indicator by Gallup World Poll reveals a similar picture with % of Germany in and % in trusting their national government some or a lot, compared to % and % in the USA.
With respect to attitudes toward specific economic and social policies in Germany, there is strong evidence for support for status-maintaining policies and investments in preserving and protecting key groups in the labor market. According to our estimates from May/June 2020 in Table 6, four measures that directly protect jobs had approval rates of 50% and more. Per-head payments for the public, by contrast, were only supported by 21% of the populace.
In another survey conducted at the same time (Politbarometer), one-off payments for families with children also won majority support, at 57%. The absence of any questions about healthcare reflects the fact that the near-universal, socialized healthcare system is uncontroversial in Germany.
With respect to American public-opinion data regarding specific policies (Table 7), health policy was the most salient policy area before the crisis and remained so afterwards. Various surveys (Kaiser Family Foundation) reveal that this was the most important issue for the American public (as to their voting intentions): 89% in February 2020 just before the pandemic, 85% in May, 87% in September, and 91% in October 2020. In other words, the electoral salience of health policy was not really affected by the pandemic in the U.S. because it had been so salient before. It thus comes as no surprise that health-policy changes proposed to fight the pandemic found broad majorities, as well.
Finally, we consider public views on inequality. In Germany, there is evidence for increased support for a wealth tax for people with at least €500,000 in assets to combat the economic consequences of the pandemic. In May 2020, 51% supported such a measure, compared to 56% in November 2020 and 58% in February 2021 (Freiburger Politikpanel). This increase in support, however, is not mirrored in support for an increase in the solidarity contribution levied since the 1990s in order to promote socioeconomic equality between East and West (a mere 17%, up from 15%). Among Americans, 67% supported a universal basic income for the course of the pandemic in July 2020 (Kaiser), but there is no evidence for robust support for sustained measures to combat inequality. Both countries show high levels of concern about division. Though this is not the same as concern about inequality, after the COVID crisis, 67% of Germans and 89% of Americans were worried about greater division in society (More in Common Survey).
Although state-level implementational differences in public-health measures such as mask mandates varied in the United States, there is no evidence that such differences exerted any systemic effects on national-level support for public-spending initiatives designed to buffer citizens from the economic effects of the pandemic. In Germany, such regional differences were more muted, with variations stemming largely from regional and time-delimited differences in the severity of the outbreak and, to a lesser extent, ideological differences among Länder governments (Behnke and Person, 2022). In general terms, Länder governments sought to achieve consensus and to limit cross-regional variations in the implementation of federal-level mandates. We therefore conclude that both states' federal structures exerted limited and non-systemic effects upon public policy and patterns of public support.
Although some recent policy initiatives by the Biden administration might lead one to conclude that the American public has become more broadly supportive of government intervention, we believe that caution is warranted on this score.
To be sure, the passage of the 2022 Inflation Reduction Act (IRA), which embarks on a set of serious industrial-policy initiatives related mostly to combatting climate change, might lead one to conclude that support for robust government intervention had durably increased. However, given widespread public unfamiliarity with the act's provisions, broad public support for the legislation should be taken with a grain of salt. Even broad support for some of the act's main provisions, such as tax credits for investments in clean energy infrastructure, public support for the act (about 74% among likely voters at the time of its passage) (Data for Progress, 2022) must be set against Americans' historical preference for tax credits, which are easier to sell politically, than direct fiscal outlays, which they distrust. These factors suggest that this departure from past trends is fragile and unlikely to be reproduced across other contexts. Indeed, the recent abandonment of the refundable and more generous child tax credit, promulgated as part of post-COVID stimulus measures, would seem to justify such skepticism.
In sum, we find support for positive reactions associated with a rally-to-the-flag effect in both countries, indicated in increases in confidence and trust in government and approval of the national leaders. For prosociality, social trust shows an increase in both countries for 2020 compared to earlier years. The policy-specific reactions are surprisingly predictable given the intensity and breadth of the pandemic's consequences. Citizens seem to remain relatively consistent in how they want governments to react. The German public seems to be supportive, as we would expect, of aggressive policies to combat inequality whereas in the U.S., we only see evidence for time-delimited measures that would last only for the duration of the crisis. In addition, post-COVID patterns of policy making in both countries, with the partial exception of the IRA in the U.S., have continued to hew to established paradigms; the German reliance on the Kurzarbeitergeld program and the American abandonment of the refundable child tax credit serve as illustrative examples. To be sure, with these surveys, we cannot be sure how citizens would have responded to other survey questions (preferably exactly the same ones in both countries). With that caveat, we argue that these outcomes are consistent with our propositions relating to both common effects across countries and cross-national variation in public support for particular sets of policy responses.
---
Conclusion: social embeddedness, public opinion, and public policy: the lessons of COVID-19
We have traced German and American policy responses to COVID-19 and explained them as the products of differing patterns of embeddedness and associated models of solidarity, using a systematic investigation of shifts in public opinion in the two countries, with particular emphasis upon levels of public trust in government and support for varying policy initiatives. The trauma of the pandemic led to highly emotionally charged public responses, with significant increases in public trust in and reliance upon government in both countries. This development reflects a significant rally-around-the-flag effect, a reaction that was perhaps surprising in the United States, given the deep currents of public distrust of government that have prevailed there since the 1980s. That said, the character of shifts in public opinion differed markedly in the two countries. In the United States, where economic relationships are much more disembedded and the moral economy more fragmented and individualistic, the public disproportionately supported individualized benefits and assistance to individuals and firms directly affected by the pandemic. In Germany, by contrast, where economies and labor markets are more deeply embedded and where workers and employers have traditionally shared a common, if sometimes contested, sense of public purpose, surveys reflected support for investments in existing collectivized labor-market institutions, rather than for merely repairing economic damage after the fact. These distinctive arrangements and patterns of social embeddedness, we argue, help to explain the fact that the measures supported by German citizens tended to be more solidaristic and institutionalized, with the Kurzarbeitergeld and state support for new hires serving as a key example. If the American response reflected a logic of post-hoc, palliative care, then, its German counterpart reflected one of preventative medicine combined with systemic investment in established social and economic relationships.

It is worth noting that the act also contained a widely popular provision enabling Medicare to negotiate drug prices with pharmaceutical companies, which no doubt bolstered support for the law overall.

Frontiers in Political Science frontiersin.org
Although the full scale of the effects of the pandemic will take several years to reveal themselves, our research suggests several important implications relating to the effects of cataclysmic shocks, such as pandemics, natural disasters, and wars. First, despite widely varying baseline levels of support for government cross-nationally, such events tend to bolster public support for the collective responses that only states can provide. Second, the kinds of policy interventions supported by citizens may well parallel, and perhaps even reinforce, pre-existing levels of social embeddedness in the economy, with patterns of group-based solidarities acting as both outgrowths and reinforcements of existing institutions and established political and social practices. In this context, distinctive national moral economies and social and economic institutions are tightly linked, with exogenous shocks revealing underlying shared moral and conceptual frameworks that are not reducible to simple institutional dynamics, but rather reflect deeply embedded understandings of the imperatives of remedying structural inequalities and differential access to economic resources. These findings are consistent with those presented in other recent work, including Béland et al. (2021).
Finally, in a more speculative vein, we suggest that the ways in which such underlying normative structures mediate between catastrophes and both public attitudes and social and economic arrangements may take years to unfold, much as the Black Death in the 14th century began to erode feudalism in ways that were far from obvious at the time. Such long-term consequences could also be highly regionalized and pan out differently across large polities, especially when they are federal states. In future research, we hope to exploit the increasing availability of longitudinal public-opinion data related to COVID-19 to arrive at more systematic conclusions about the relationship between catastrophes and public attitudes, in an era in which such catastrophes-ranging from pandemics to natural disasters whose severity and frequency are increasing in the wake of human-engendered climate change-are becoming both more common and more severe.

The article cited is one contribution to a special issue of Social Policy and Administration, several of the articles of which deal with cross-national and cross-regional variation in policy responses to COVID-19.
---
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author.
---
Funding
The author(s) declare financial support was received for the research, authorship, and/or publication of this article. The libraries of Tulane University, Wake Forest University, and the University of Duisburg-Essen provided research support, and the latter generously funded the access to the Roper Archive and Gallup Analytics. AG would like to thank the European Research Council for funding this paper, including publication costs, through the Consolidator Grant project POLITSOLID "Political Solidarities" (#864818), accessible at https://bit.ly/politsolid.
---
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
---
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
---
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpos.2024.1273824/full#supplementary-material
Algorithmic fairness (AF) has been framed as a newly emerging technology that mitigates systemic discrimination in automated decision-making, providing opportunities to improve fairness in information systems (IS). However, based on a state-of-the-art literature review, we argue that fairness is an inherently social concept and that technologies for AF should therefore be approached through a sociotechnical lens. We advance the discourse on AF as a sociotechnical phenomenon. Our research objective is to embed AF in the sociotechnical view of IS. Specifically, we elaborate on why outcomes of a system that uses algorithmic means to assure fairness depend on mutual influences between technical and social structures. This perspective can generate new insights that integrate knowledge from both technical fields and social studies. Further, it spurs new directions for IS debates. We contribute as follows: First, we problematize fundamental assumptions in the current discourse on AF based on a systematic analysis of 310 articles. Second, we respond to these assumptions by theorizing AF as a sociotechnical construct. Third, we propose directions for IS researchers to enhance their impacts by pursuing a unique understanding of sociotechnical AF. We call for and undertake a holistic approach to AF. A sociotechnical perspective on AF can yield holistic solutions to systemic biases and discrimination. | Introduction
Biases in automated decision-making have negative implications for different stakeholders: (1) individuals and groups, who are at risk of discrimination and systematically inferior outcomes, (2) organizations, which may receive bad publicity and/or may suffer from legal consequences given that systematic discrimination is often penalized by law (White & Case, 2017), and (3) societies, which risk inflexible social stratification and political riots by those who have been discriminated against. Thus, it is critical to ensure fair decisions -especially those taken or supported by algorithms.
An unfair decision is often hard to detect and repair. This is particularly challenging when the decision is based on machine learning (ML). ML cannot provide meaningful explanations of how a decision was made. Thus, in case of errors, easy corrections are not possible (Mitchell, 2019). Static and rules-based algorithms can also generate biases, which remain obscured unless outcomes are systematically reviewed.
Since algorithms are chosen based on their performance on the task at hand (e.g., predicting a risk) rather than on fairness, undesired effects often go unnoticed. Recent advances in algorithmic fairness (AF) claim to provide a remedy.
Algorithmic fairness is used to refer to technological solutions that prevent systematic harm (or benefits) to different subgroups in automated decision-making (Barocas & Selbst, 2016). From a technical perspective, AF seeks to mathematically quantify bias and, based on this metric, to mitigate discrimination in ML against subgroups. In recent years, several operationalizations and applications of AF have emerged in the information systems (IS) research (Ebrahimi & Hassanein, 2019; Feuerriegel et al., 2020; Haas, 2019; Martin, 2019; Rhue, 2019; van den Broek et al., 2019; Wang et al., 2019). These publications have often viewed unfairness in decision-making systems as a technical issue and have sought technical remedies. Thus, they have followed most other studies, which view AF as a technical discipline. However, unfairness in algorithmic decision-making is not solely a technical phenomenon: it has societal, organizational, and technical sources, is reinforced by both social and technical structures, and -as we argue -should be approached from a sociotechnical perspective.¹ The sociotechnical perspective acknowledges that a system's outcomes depend on mutual influences between technical and social structures, as well as between instrumental and humanist values (Sarker et al., 2019). Decision-making does not happen in a purely technical context: persons are subjects and objects of decisions, or experience higher-level consequences thereof. However, a purely social framing may also be inappropriate if it does not consider how the proposed social solutions fit the multiple algorithms in decision-making processes. Algorithms not only support persons in taking a decision, but may also delude them, may trick them into a decision, or may simply be used as an excuse when a decision becomes unpopular (O'Neil, 2016). Thus, the social and technical components of decision-making become intertwined in many ways, demanding a broader, ecosystem-based perspective (Stahl, 2021).
The sociotechnical perspective bears the potential to yield holistic solutions to the unfairness that emerges in the state-of-the-art decision-making configurations.
An effective approach to AF requires coordination and balance among technical innovation, political/legal actions, and social awareness. However, we do not yet have a unifying perspective that integrates technological and social efforts in the context of AF.
Researchers across disciplines have proposed various solutions, focusing on specific aspects of AF, but have lacked a comprehensive, overarching framework to ensure coherence among the approaches. For instance, the political decision that forbids the collection of sensitive attributes (e.g., gender or ethnicity) does not align well with the algorithmic solutions that use exactly these attributes to assure that no group is systematically discriminated against. Thus, organizations have rarely considered AF to be relevant (Mulligan et al., 2019), because the proposed technological solutions are considered incomplete and thus not practically applicable. Motivated by this void, we explore the potential of adopting a sociotechnical perspective to understand the origins of algorithmic unfairness and to study its impacts in IS practice. Specifically, we contribute by (1) interrogating the premises that underlie purely technical and social perspectives, (2) discussing how a sociotechnical perspective yields new insights about AF's complex nature, and (3) showing how IS research can take a leading role in creating and implementing holistic AF solutions.

¹ We develop AF's meaning from a purely technological notion, as has been dominant to date, to a sociotechnical notion, positioning AF as a phenomenon. According to the former, AF comprises technological means (e.g., additional criteria implemented in the algorithm) that prevent systematic bias in an algorithm's output. In our proposed sociotechnical notion, AF describes the use of algorithmic as well as organizational or processual ways to assure that the application and output of whole decision processes that involve algorithms does not produce systematic discrimination and injustice. We use fairness to refer to the notion or construct being used to assess the quality of a decision or a state.
Our research objective is to embed AF in the sociotechnical view of IS. We address it according to the steps proposed by Alvesson and Sandberg (2011). First, we review the technical literature on AF, identifying underlying assumptions of current AF practice. We then investigate the premises behind positions that criticize current AF approaches as groundless or inadequate. Nonetheless, we acknowledge that fairness has emerged as an important social construct that may be compromised by a purely technical perspective. Simultaneously, we accept that technology will be involved in high-stakes decisions and fairness considerations. We regard AF as an inherently sociotechnical construct and elaborate on its roles in sociotechnical systems (Sarker et al., 2019).
Further, we map the existing body of AF literature onto a sociotechnical perspective, identifying the points at which algorithmic (un)fairness can arise in sociotechnical systems, and then formulate research directions for IS: they show how a sociotechnical perspective can help address challenges of AF. Overall, we interrogate the fundamental assumptions that underlie current AF research and propose future research directions.
The remainder of this paper is structured as follows. In Section 2, we provide a background on the technical notion of AF (which dominates in the literature) and contrast it to the notions of fairness in other disciplines. In Section 3, we describe our overall methodology. In Section 4, we explore the premises of the literature on AF as a technical or a social construct. In Section 5, we argue that AF should be understood as a sociotechnical phenomenon. In Section 6, we employ this view to classify the origins of biases and propose directions for the IS discipline to mitigate bias.
---
Background on Fairness and Algorithmic Fairness
AF seeks to detect, quantify, and subsequently mitigate disparate harm (or benefits) across subgroups affected by automated decision-making (Barocas & Selbst, 2016). In this section, we review how fairness has been defined in AF (Section 2.1) and summarize the current discussion of how fairness has been conceptualized in other disciplines (Section 2.2). This allows us to discuss the relationships between AF and other notions of fairness.
---
Fairness as a Mathematical Construct
A technical approach to AF uses mathematical formalizations (or notions) of fairness.
The question What is fair? is reduced to a single mathematical expression. The mathematical notion of fairness is integrated into algorithms as a mathematical constraint or directly into the objective function. For illustration, we use a recidivism risk assessment example in which an ML system assesses the risk that a prematurely released prisoner will commit another crime. Such systems are widely used in the U.S., with COMPAS as a typical example. However, these systems were found to show systemic discrimination against people of color, making clear that it is necessary to study AF (Angwin et al., 2016;O'Neil, 2016). Key to AF is that there are different mathematical notions of fairness (for overviews, see Barocas & Selbst, 2016;Friedler et al., 2019;Verma & Rubin, 2018). These are loosely grouped into various concepts of fairness across (i) groups or (ii) individuals.
Notions of fairness at the group level build on a predefined sensitive attribute (Barocas & Selbst, 2016) that describes membership in a protected group, against which discrimination must be prevented. In practice, the sensitive attribute is declared a priori (e.g., by policymakers or IS practitioners). Systems for recidivism risk assessment use race as a sensitive attribute to ensure fairness across people with different skin colors (Angwin et al., 2016;O'Neil, 2016). Based on this sensitive attribute, group-level notions of fairness interpret discrimination by how the prediction model's outcomes are distributed across groups: inside and outside the protected group. An exemplary notion of fairness, statistical parity, requires the likelihood of events to be equal across all groups.
In our example, statistical parity requires the recidivism system to flag white individuals and people of color as high risk at the same rate.
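To make the notion concrete, statistical parity can be checked in a few lines of code. This is a minimal sketch on invented toy data, not taken from any of the reviewed systems; all names and numbers are illustrative assumptions.

```python
# Minimal sketch: statistical (demographic) parity on synthetic data.
# Variable names and the toy data are illustrative, not from the paper.

def selection_rate(predictions, groups, group_value):
    """Share of a group's members predicted as positive (e.g., 'high risk')."""
    members = [p for p, g in zip(predictions, groups) if g == group_value]
    return sum(members) / len(members)

# 1 = predicted high risk, 0 = predicted low risk
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

rate_a = selection_rate(y_pred, group, "a")  # 3/4 = 0.75
rate_b = selection_rate(y_pred, group, "b")  # 1/4 = 0.25

# Statistical parity requires rate_a == rate_b;
# the gap quantifies the violation.
parity_gap = abs(rate_a - rate_b)  # 0.5
```

A designer would typically impose a bound on `parity_gap` as a constraint during model training or selection.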
---
Figure 1. Algorithmic Fairness at the Group Level Defines Mathematical Notions Using the above Confusion Matrix
Other group-level notions of fairness rely on prediction errors (Hardt & Price, 2016; Kleinberg et al., 2017; Zafar et al., 2017), drawing on the confusion matrix from Figure 1. For instance, one such notion, equality of accuracy, requires the ML system to attain equal prediction accuracy across both groups: the share of correctly classified individuals (those correctly recommended for release and those correctly recommended to remain in prison) must be the same for white prisoners and prisoners of color. As the different notions show, there is no universal operationalization of fairness; further, it is mathematically impossible to satisfy all the different notions of fairness at once (Chouldechova, 2017; Kleinberg et al., 2017). Thus, the IS designer must choose an appropriate mathematical notion for assuring an unbiased outcome, yet little guidance is available for this choice.
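The error-based notions can likewise be computed from per-group confusion matrices. The sketch below, on invented toy data, shows how equality of accuracy can hold even while the groups suffer different kinds of errors, which is one reason the notions can disagree.

```python
# Sketch: error-based group fairness metrics from per-group confusion matrices.
# Toy data only; labels and groups are illustrative assumptions.

def confusion(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def accuracy(y_true, y_pred):
    tp, fp, fn, tn = confusion(y_true, y_pred)
    return (tp + tn) / (tp + fp + fn + tn)

# Data split by the sensitive attribute into groups a and b.
y_true_a, y_pred_a = [1, 0, 1, 0], [1, 0, 0, 0]
y_true_b, y_pred_b = [1, 0, 1, 0], [1, 1, 1, 0]

acc_a = accuracy(y_true_a, y_pred_a)  # 0.75
acc_b = accuracy(y_true_b, y_pred_b)  # 0.75

# Equality of accuracy holds (acc_a == acc_b), yet the error types differ:
# group a has a false negative, group b a false positive, so notions based on
# error rates (e.g., equalized odds) would flag this classifier as unfair.
```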
Explainable or interpretable ML (Caruana et al., 2020; Mitchell, 2019; Molnar, 2020; Senoner et al., 2021) can offer support by allowing practitioners to observe both the outputs of their models and the (potential) reasoning that supports them. However, explainable ML often delivers insights that users do not perceive as useful (Molnar, 2020). Further, all notions of fairness share the same key requirement: the sensitive attribute must be available to the algorithms within the data so that potential discrimination can be mitigated, even though collecting such data may be illegal owing to its sensitive nature.
Notions of fairness at the individual level are based on the assumption that similarly situated individuals should be treated similarly (Dwork et al., 2012). This approach strives to ensure fairness independent of group membership. This requires a definition of similarity that is suitable in the given use case and provides the basis on which to perform pair-wise comparisons (rather than group-wise ones). In recidivism risk assessment, this requires two individuals whose relevant attributes (criminal history, sentence length, etc.) are equal to be subject to the same decision. As in group-level fairness, individual-level fairness requires access to sensitive attributes. It also leaves open how to specify which attributes are relevant or how to formalize relevant yet nonquantitative attributes (e.g., psychological instability or addiction).
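The individual-level idea can be sketched as a Lipschitz-style check: the gap between two individuals' scores should be bounded by their distance under the chosen similarity metric. The metric, the threshold, and the attributes below are illustrative assumptions, not prescribed by the literature.

```python
# Sketch of individual fairness: similar individuals should receive
# similar predictions (Dwork et al., 2012). The distance function,
# tolerance, and attributes are illustrative choices.

def distance(x1, x2):
    """Similarity metric over relevant attributes (here: Manhattan distance)."""
    return sum(abs(a - b) for a, b in zip(x1, x2))

def individually_fair(individuals, scores, epsilon=0.1, lipschitz=1.0):
    """Check that score gaps are bounded by attribute distances."""
    n = len(individuals)
    for i in range(n):
        for j in range(i + 1, n):
            gap = abs(scores[i] - scores[j])
            if gap > lipschitz * distance(individuals[i], individuals[j]) + epsilon:
                return False
    return True

# Hypothetical attributes: (prior convictions, sentence length in years).
people = [(2, 3), (2, 3), (0, 1)]
risk_scores = [0.8, 0.3, 0.1]  # two identical individuals, very different scores

individually_fair(people, risk_scores)  # False: identical pair, 0.5 score gap
```

Note how the check depends entirely on the choice of `distance`; this is exactly the open problem the text describes, since relevant but non-quantitative attributes resist formalization.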
---
Fairness as a Social Construct
Philosophical debates about fairness were originally driven by the question of the distribution of goods and rights and the utopian ideal of a fair ruler. Today's understanding of fairness suggests that all persons with equal gifts should have equal opportunities for advancement, regardless of their initial position in society (Rawls & Kelly, 2003). In short, the current philosophical debate is dominated by the idea of an equal distribution of chances for self-advancement as the route to equity in the distribution of goods (an individual's benefits being proportional to their input). Thus, it is unfair to prevent individuals from improving their situation by limiting their chances, intentionally or otherwise, based on race, origin, or gender.
Law, criminology, sociology, and the political sciences often take a modern philosophical approach to addressing fairness. This leads to the emergence of restorative, transitional, or retributive justice concepts, in which society (re)establishes a fair distribution by repair, re-balancing of power, or punishment (Clark, 2008). In these concepts, fairness is neither given nor predefined; instead, it is continually constructed (Lind et al., 1998). This line of reasoning stresses actionable notions of social fairness: constantly balancing several forces to preserve fairness. AF has emerged as one such defining and affirming force; its mathematical limitations, however, are new compared to fairness as a social construct.
Anthropology focuses on the historical origins of fairness as an innate human value also observed in other primates (Brosnan, 2013). For instance, persons and primates who contribute the same amount of work as others but receive a lesser payoff are likely to stop working (Brosnan & de Waal, 2014). They tend to punish those engaged in the unequal distribution of goods (Brosnan, 2013). This suggests that the sense of fairness emerges from evolution toward collaborative societies. Since collaborative behaviors are relevant for survival, fairness propagates over generations (Hamann et al., 2011). Thus, primates may have a built-in fairness calculation mechanism. AF may look like an attempt to discover this mechanism, which is likely to include as yet unknown aspects that are hard to quantify.
Neuroscience looks for the origins of fairness in the human brain rather than in history. Brain processes concerning fairness occur in an evolutionarily old brain area (Vavra et al., 2017). This supports claims from anthropology and suggests that fairness is emotional and can be experienced as an intrinsic need (Decety & Yoder, 2016). A neurological rewards system is activated when goods distribution reaches a fair state (Vavra et al., 2017). However, this process is moderated by external factors, including similarity among affected individuals, situational identity, or salient personal goals (Bargh, 2017;Cohn et al., 2014). This may explain why people often disagree about what is fair. Thus, achieving fairness remains an ongoing process rather than a one-time challenge.
Finally, psychology sees fairness as a perception of an individual comparing themselves to other individuals whom they consider to be relational partners (i.e., as somehow similar to them) (Adams, 1963). This view was adopted (and extended) by organization science, which differentiates three aspects: distributive (who receives what?), reciprocal (is the treatment or benefit adequate to a person's input?), and procedural (how was the decision taken?). These aspects led to the formulation of organizational justice theory (Greenberg, 1986). Reciprocal fairness captures the expectation of truthful and respectful treatment by persons who act accordingly themselves (Bies, 2001). Procedural fairness requires that individuals affected by a decision be provided with a justification and be allowed to contribute to the decision and voice concerns (Greenberg, 1986). Organizational justice was adopted in IS research and is often used to study fairness perceptions inside organizations and their relationships to technology-driven organizational change (Joshi, 1989; Li et al., 2014; Tarafdar et al., 2015). While all aspects of organizational justice may be affected by algorithmic bias, the distributive aspect receives the most attention (Robert et al., 2020).
---
Methodology
We engage in problematization so as to establish an informative research agenda for IS and the sociotechnical notion of AF. Problematization is an approach for developing research questions from a body of literature. We explicate the underlying assumptions in existing studies and question them (Alvesson & Sandberg, 2011). This frames research as an ongoing dialogue that relies on challenging the status quo rather than gap-filling.
Problematization is a way to facilitate the development of more influential theories in the management and organization literature (Alvesson & Sandberg, 2011). It has been promoted in the recent IS literature (Avital et al., 2017; Grover & Lyytinen, 2015; Templier & Paré, 2018) and was adopted in earlier IS studies (Ortiz de Guinea & Webster, 2017). Since IS research into AF is still emerging (Dolata & Schwabe, 2021; Ebrahimi & Hassanein, 2019; Haas, 2019; Kordzadeh & Ghasemaghaei, 2021; Marjanovic et al., 2021; Martin, 2019; Rhue, 2019; van den Broek et al., 2019; Wang et al., 2019) and relies on perspectives on AF from reference disciplines (most prominently, computer science), we chose to interrogate the assumptions that characterize the AF discourse via a multidisciplinary review. Alvesson and Sandberg (2011) differentiated between categories of assumptions that differ in depth and scope: in-house, root metaphor, paradigm, ideology, and field assumptions.
For more detailed descriptions of each category, we refer the reader to Alvesson and Sandberg (2011). We use this classification to assess the identified assumptions' impacts. This provides a solid base for theorizing AF in sociotechnical perspectives of IS.
The literature we analyzed was collected from different sources and was then classified. On the one hand, we screened articles from four key AF conferences (KDD, ICML, NeurIPS, FAccT). This led to 166 candidate articles for analysis. On the other hand, we conducted a query-based search across the top 25% of outlets from a multidisciplinary background (e.g., management, psychology, etc., according to Scimago Journal & Country Rank), and conducted a criteria-based selection within them. This led to the selection of 114 more articles. In a subsequent step, we classified all articles by their approaches to AF (technical vs. social), focus (social component, technical component, data and information, adaptation between components, broader context), scope (generic vs. limited to a specific application domain), and methodological paradigm (engineering, exploratory, literature review, critical, behavioral, formal).
Further, we listed and analyzed the implicit and explicit assumptions according to the identified approaches. The Appendix contains complete descriptions of the steps involved in the data collection and analysis.
---
Problematizing Algorithmic Fairness
In this section, we problematize AF. We explicate the premises of the papers classified as following the technical approach (Section 4.1) and of the studies that took a more social perspective (Section 4.2). We infer the assumptions that underlie AF based on a systematic literature review (for details, see the Appendix). Specifically, we identified common assumptions in the literature, classified them as having either a technical or a social orientation, and identified articles that exemplify the assumptions we found (see Tables 1 and 2). When describing the assumptions, we refer to articles from the literature review. The analysis revealed that the articles lack a shared, coherent agenda; at first sight, the assumptions they make may even contradict one another. Not all the assumptions we list in the following sections co-exist in each paper we considered; the papers differ in the assumptions they rely on. While the literature, independent of its core approach, provides valuable input, it has not conceptualized AF as a sociotechnical construct, despite the goal of preventing technology-based societal discrimination being shared across fields, perspectives, and attention foci. We don't intend to invalidate the research in the reviewed studies or to suggest the existence of irreconcilable camps. Rather, we argue that AF should be seen as a sociotechnical construct (Section 4.3), a perspective that can reconcile the approaches to AF.
---
The Premises of the Technical Perspective
Identification of the assumptions that underlie the perspectives is the first step toward a unified, sociotechnical understanding of AF and an agenda for advancing the research in a more coherent, holistic direction.
The literature on AF has been dominated by the technical perspective.
Accordingly, AF has been defined as efforts to "translate regulations mathematically into non-discriminatory constraints, and develop predictive modeling algorithms that take into account those constraints, while at the same time be as accurate as possible" (Žliobaitė, 2017), or as "the aim of assessing and managing various disparities that arise among various demographic groups in connection with the deployment of ML-supported decision systems in various (often allocative) settings." (Fazelpour & Lipton, 2020).
While there is nothing wrong with the aim of obeying "non-discrimination constraints" or "managing disparities," this perspective is restricted by the underlying notion of fairness.
The technical literature relies on a range of paradigmatic assumptions, i.e., shared beliefs, definitions, and methodological approaches (Alvesson & Sandberg, 2011). We will now review these assumptions (see Table 1) along with appropriate examples from the reviewed studies.
First, the proposed solutions often assume a comprehensive a priori understanding of where and why biases occur (Abràmoff et al., 2020). The studies frame specific biases as problems in search of a solution. In good engineering fashion, they focus on selecting a notion of fairness appropriate to this bias, develop a mathematical construct to represent it, and use it in the algorithms. They posit that there is a mathematical or formal way to adequately resolve biases without generating new, previously unknown ones (Dutta et al., 2020). The studies follow the engineering assumption: "Each problem has a human-made technical solution. A solution is good when it solves the problem."

Second, technical AF posits a conceptual equivalence of various notions of fairness. There are concurrent fairness ideals (equity, equality, need, etc.), each with varying social connotations (Binns, 2018; Narayanan, 2018; Rawls & Kelly, 2003). Still, most technical researchers treat them synonymously and select among them as if they were interchangeable. There is no consistency concerning what should be considered when selecting a fairness ideal, how the selection should be conducted, or what the long-term social consequences of employing any of the measures are. For instance, some studies have simply referred to court cases or have repeated arguments from ethics or political philosophy (Fazelpour & Lipton, 2020). Others have advocated including the selection of an adequate notion of fairness in a participatory design process (Ahmad et al., 2020) or a survey (Srivastava et al., 2019b). These approaches move the decision on an adequate notion of fairness away from the designer or the researcher to the broader public, who are presented with various notions of fairness and asked to choose among them. Yet, even in these cases, the implications of the choice have barely been considered.
Depending on the population sampling method, such surveys can themselves exhibit bias. Further, the thought experiments and hypothetical situations used in these approaches have been shown to often fail (Gendler, 2014). Overall, each selection approach makes an equivalence assumption: it assumes that fairness ideals are to some extent equivalent and that choosing between them takes place in a closed environment, as opposed to open-ended reality.
Third, having chosen a fairness ideal, the technical literature has posited a mathematical operationalization of it. Owing to their general and abstract formulation, many ideals that rely on equity or equality do not directly fit mathematical disparity measures (Fazelpour & Lipton, 2020): the ambiguity of ethical and legal rules allows human judges to deliberate about a concrete situation, balancing the distribution of goods or rights. Developing a strict metric involves value assessments detached from any specific context or situation, yet is often presented as a mere translation (Narayanan, 2018; Žliobaitė, 2017). This induces the illusion that abstract notions can be expressed mathematically without losses or unexpected consequences. We refer to this as the translation assumption.
Fourth, the need to quantify a notion of fairness pushes researchers and practitioners to primarily focus on distributive justice (Fazelpour & Lipton, 2020). Yet fairness may go beyond the distribution of goods, to address interactional or procedural justice. If a system cannot be used by some people of color because facial recognition does not work properly for them, this impacts on their dignity and belongs to the interactional justice dimension (Celis Bueno, 2020;Hanna et al., 2020;Robert et al., 2020). Many technical studies implicitly reformulate this as a distributive justice concern (e.g., the distribution of properly recognized faces across groups). Further, the studies have focused on between-subject justice across groups or individuals (comparison with others) and not on within-subject justice (comparison with the subject's engagement).
This can generate perceptions of unfairness. An individual could be treated differently at two points in time despite behaving in exactly the same way, simply because others changed their behavior and the system adapted to the altered distribution. These examples highlight the consequences of a distributiveness assumption, which posits that all fairness issues can be represented as a statistical distribution.
While a statistical engineering approach to AF has dominated the current debate, there are alternative approaches to assuring fairness through technical interventions.
However, they rest on further assumptions and often commit to one or more of those already discussed (such as the engineering assumption or the translation assumption).
The counterfactual fairness discourse has pursued a view of AF that uses directed acyclic graphs to model social biases that may occur in data (Coston et al., 2020; Kusner et al., 2017). This line of research assumes that social bias is an element of a global causal structure and needs to be explicitly modeled in algorithmic decision-making (Kusner et al., 2017). Alternatively, some researchers see the origins of bias in data and treat the problem as a database repair process: they work to achieve the desired balance and train the models on a sort of ideal, bias-free dataset rather than on real data (Salimi et al., 2020). While some biases ("foreigners commit more crime than locals") or protected groups ("people of color") may be easy to identify and explicate, others are more implicit, limited to a local community, or simply rely on urban myths and fake news ("small people drive large cars," "obese people lack self-discipline," etc.). Some members of society may feel stigmatized if social biases need to be explicated and presented in a model or data. By positing that biases can be comprehensively explicated, many approaches fall victim to an explicitness assumption.
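The data-level line of work can be illustrated with a simple preprocessing step. The sketch below implements reweighing in the spirit of Kamiran and Calders (2012), a technique related to, though not identical with, the database-repair approach cited above; the variable names and toy data are illustrative assumptions.

```python
# Hedged sketch of data-level debiasing via reweighing (cf. Kamiran & Calders,
# 2012): instances are weighted so that group membership and the label become
# statistically independent in the effective training distribution.

from collections import Counter

def reweighing(groups, labels):
    """Weight each instance by w(g, y) = P(g) * P(y) / P(g, y)."""
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]  # group a receives more positive labels
weights = reweighing(groups, labels)

# Under-represented combinations (group a with label 0, group b with label 1)
# receive weights > 1, so a model trained with these weights sees a label
# distribution that is balanced across groups.
```

Note that this is exactly the kind of intervention the text questions: it presupposes that the relevant group and the direction of the bias are known and explicit in the data.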
Finally, almost all engineering approaches to unfairness share independence assumptions. We identified three assumptions concerning independence: (1) context-independence, (2) time-independence, and (3) component-independence, as follows:
(1) Context-dependence refers to whether a one-size-fits-all approach is replaced by tailored, problem-specific solutions for AF. Here, most technical studies use multiple 'synthetic' or 'publicly available' datasets that represent various independent decision problems (recidivism prediction, diabetes treatment, or creditworthiness). These studies developed and evaluated solutions with these datasets and claim applicability across situations (Celis et al., 2018; Coston et al., 2020; Valera et al., 2018; Zafar et al., 2017). This contradicts evidence that fairness assessments are highly context-dependent (Srivastava et al., 2019b; Wong, 2020; Zafar et al., 2017). (2) Time-dependence refers to whether AF solutions consider the dynamics of the environment. Here, technical studies on AF often rely on data from the past (e.g., the U.S. Census from the 1990s) to prove their system's fairness, but prescribe future use (Bera et al., 2019). They see data and the decision context as static, committing to a time-independence assumption.
(3) Component-dependence denotes whether individual components of an AF system are considered in interaction with their environment. Previous studies that followed a technical perspective on AF focused on improving and testing a single classifier, ignoring the other technical components that will interact with it. The classifier will be like a small gear-wheel in a larger technological system involving classifiers, data pre-processors, and interfaces to other systems. In a broader sense, it will become part of a complex sociotechnical system whose characteristics emerge not only from its parts but also from interactions between the parts and the context (Mitchell, 2009). Because "a system is more than the sum of its parts" (Ackoff, 1973), there is no guarantee that a system composed of fair parts will in fact be fair (Dwork & Ilvento, 2018). Thus, these studies commit to a component-independence assumption. The three abovementioned independence assumptions prevail explicitly or implicitly, despite singular studies that paid attention to context (Chouldechova et al., 2018; Kallus & Zhou, 2019; Rahmattalabi et al., 2019), accepted the temporal dynamics of fairness issues (D'Amour et al., 2020; Liu et al., 2018), or addressed technical systems in a holistic way (Dwork & Ilvento, 2018). They represent notable exceptions rather than the norm.

Table 1. Assumptions Underlying the Technical Perspective on AF, with Exemplary Quotes from the Reviewed Literature

Equivalence assumption: operationalizations and notions of fairness are equivalent and can be exchanged based on their performance.
"In legal scholarships, the notion of fairness is evolving and multi-faceted. We set an overarching goal to develop a unified machine learning framework that can handle any definition of fairness, the combinations, and also new definitions that might be stipulated in the future." (Quadrianto & Sharmanska, 2017)
"While the problem of selecting an appropriate fairness metric has gained prominence in recent years, it is perhaps best understood as a special case of the task of choosing evaluation metrics in machine learning." (Hiranandani et al., 2020)

Translation assumption: complex, ambiguous notions of fairness or legal rules can be translated into mathematical or statistical terms without loss.
"Although the DI doctrine is a law in the United States, violating the DI doctrine is by itself not illegal; it is illegal only if the violation cannot be justified by the decision-maker. In the clustering setting, this translates to the following algorithmic question: what is the loss in quality of the clustering when all protected classes are required to have approximately equal representation in the clusters returned?" (Bera et al., 2019)

Distributiveness assumption: representing problems of interactional or procedural justice in terms of distributive justice so as to facilitate statistical processing.
"Though helpful in seeing a systematic error, gender, and skin type analysis by themselves do not present the whole story. Is misclassification distributed evenly amongst all females? Are there other factors at play? Likewise, is the misclassification of darker skin uniform across gender?" (Buolamwini & Gebru, 2018)

Explicitness assumption: existing prejudices, social biases, and protected groups can be known upfront and can be made explicit in the model (affects especially counterfactual modeling of fairness).
"We advocate that, for fairness, society should not be satisfied in pursuing only counterfactually-free guarantees. (…) We experimentally contrasted our approach with previous fairness approaches and show that our explicit causal models capture these social biases and make clear the implicit trade-off between prediction accuracy and fairness in an unfair world. We propose that fairness should be regulated by explicitly modeling the causal structure of the world." (Kusner et al., 2017)

Independence assumptions (context-independence, time-independence, component-independence): AF solutions can be developed independently of the application context, of temporal dynamics, and of the other components they interact with.
"We adopt surrogate functions to smooth the loss function and constraints, and theoretically show that the excess risk of the proposed loss function can be bounded in a form that is the same as that for traditional surrogated loss functions. Experiments using both synthetic and real-world datasets show the effectiveness of our approach." (Y. Hu et al., 2020)
---
The Premises of the Social Perspective
The social perspective on algorithmic (un)fairness is becoming increasingly important.
Positions with a social perspective on AF are appearing at computer science conferences and in journals (Barabas et al., 2020; Binns, 2018; Mulligan et al., 2019; Robert et al., 2020) as well as in outlets from other disciplines, including the social sciences (Hoffmann, 2019), philosophy (Mohamed et al., 2020; Wong, 2020), and criminology and law (Barocas & Selbst, 2016; Helberger et al., 2020; Završnik, 2019). This makes clear that researchers across disciplinary boundaries are engaging with the social aspects of AF and are trying to understand it as a problem that cannot be completely solved through technology. For instance, they indicate that sources of algorithmic unfairness go beyond the issue of unbalanced data and derive from a lack of political power balance (Barabas et al., 2020; Mohamed et al., 2020) or the lack of political discourse about what fairness is (Wong, 2020; Završnik, 2019). Other studies have described the status quo from the perspectives of social science (Helberger et al., 2020) or organizational science (Robert et al., 2020). The multidisciplinary debate aims to identify and overcome the limitations of the technical approach. Most studies refer to a common-sense image of ML and, in the context of problematizing (Alvesson & Sandberg, 2011), suggest the presence of assumptions that fall under the concept of root metaphors. These assumptions relate to a general understanding of the subject matter shared beyond a single discipline; in this case, the shared notion of ML and algorithm engineering as being solely about data and their processing. We will now review these assumptions, listing them in Table 2 along with examples from the reviewed studies.
Studies that follow a social perspective have paid little attention to the differences between the mathematical notions, or even the methods, used to represent fairness in algorithms. While the technical community has developed a wide variety of methods for reducing discrimination in ML (e.g., counterfactual reasoning or debiasing for textual data), these were rarely considered in social discourses as distinct ways of operationalizing fairness (Binns, 2018). However, as discussed, these methods may have crucial implications for both the technology's design and the sociotechnical context: some methods require explications of potential social biases, others rely on the availability of sensitive data, and yet others manipulate the data to reduce the risk of bias. This shows that the social discourse on AF treats technical approaches as a black box, without decoding the notions of fairness encoded by the engineers or their social, political, and organizational implications. Although the social perspective acknowledges the AF-related progress of the technical perspective, its engagement with the subject matter has remained superficial; it often seems that technical AF is reduced to a generic algorithmic approach. Given that the variety of technical AF and the interplay between specific social and technical measures remain unpacked, we refer to a black box assumption.
Some studies have engaged in a bad actor debate: they try to identify who is to blame for unfairness in automated decision-making. There are two general lines of argumentation: some argue that the application of ML in high-stakes decision-making is the problem (algorithms or big data is the bad actor) (e.g., Barocas & Selbst, 2016;Završnik, 2019), while others argue against developers, designers, and organizations who provide these solutions (Kuhlman et al., 2020;Mohamed et al., 2020;Wong, 2020). We call the first tendency the technology agency assumption, and the second the human agency only assumption. Others propose a shared responsibility: "humans and algorithms co-conspire in upholding discrimination." (Hoffmann, 2019).
We agree that understanding liability for discrimination is important to countering it with regulation. It is necessary to understand who or what the source of discrimination is. Nonetheless, some aspects of this debate may benefit from acknowledging recent developments in the technical approach to AF, for instance that decisions concerning fairness measures are often taken through participation of a broader public (M. P. Kim et al., 2020;Srivastava et al., 2019b). While we understand that it is important to identify the origins of biases, the discourse often does not reveal how exactly the bad actor in question impacts on the algorithmic decision-making and which technical components are affected -yet such deliberation could support the technical AF community in approaching the key points (Ågerfalk, 2020;Draude et al., 2019;Hoffmann, 2019;Ziewitz, 2016).
Finally, several authors, especially in legal science, have idealized the status quo in law enforcement and have rejected algorithmic support (e.g., Beyleveld & Brownsword, 2019; Huq, 2018; Johnson, 2020; Završnik, 2019), while others have proposed replacing the current system with 'code-based' sentencing (Chandler, 2019; De Filippi & Hassan, 2018; Kalpokas, 2019; Lessig, 2000). Each side commits to a purity assumption, claiming that only its preferred approach yields unbiased decisions. However, there is accumulated evidence that unfair decisions have been made in the past regardless of whether they relied on human reasoning or on algorithms (Gladwell, 2019; O'Neil, 2016; Payne et al., 2017). Biases in law enforcement may emerge from existing training, incentive systems, or archetypes in the organizations (Gladwell, 2019), and these biases may be present both in persons and in algorithms. Instead of claiming that one or the other is better or fairer, one must acknowledge that "accountability mechanisms and legal standards that govern decision processes have not kept pace with technology" (Kroll et al., 2016). We claim that coordinating algorithm development with the overall development of the justice system may lead to law enforcement with fewer systematic biases.

Table 2. Assumptions Underlying the Social Perspective on AF, with Exemplary Quotes from the Reviewed Literature

Black box assumption: technical approaches to AF are treated as a generic, undifferentiated whole, without unpacking the specific notions of fairness they encode.
"Research in algorithmic fairness has recognized that efforts to generate a fair classifier can still lead to discriminatory or unethical outcomes for marginalized groups, depending on the underlying dynamics of power, because a 'true' definition of fairness is often a function of political and social factors. Quijano (2000) again speaks to us, posing questions of who is protected by mainstream notions of fairness, and to understand the exclusion of certain groups as 'continuities and legacies of colonialism embedded in modern structures of power, control, and hegemony'." (Mohamed et al., 2020)

Bad actor assumption:
• technology agency: unfairness emerges as the result of applying technology to the taking of decisions
• human agency only: unfairness always emerges because of humans; technology just perpetuates human biases or a power imbalance
"Of course, not all work in this area reduces discrimination entirely to some set of 'blameworthy' humans behind the machine. Many discussions make clear that algorithmic discrimination can happen in ways that are unintentional or difficult to account for, for example when upstream social biases are reflected in training data in ways that may be difficult to predict. In these cases, biases are said to 'sneak in', 'whether on purpose or by accident', or in ways that only emerge over time." (Hoffmann, 2019)

Purity assumption: seeing either the justice system or algorithmic decision-making as superior and as the reference point for fairness.
"Second, it shows why automated predictive decision-making tools are often at variance with fundamental liberties and also with the established legal doctrines and concepts of criminal procedure law." (Završnik, 2019)
---
The Need for a Sociotechnical Perspective
Overall, the current AF discourse suffers from assumptions that render exchanges between various approaches difficult. In particular, the relationships between social and technical aspects of AF have largely been overlooked. The social and the technical perspectives both provide valid points and remedies. Although some articles have addressed the relationship between the social and the technical, they have rarely moved beyond identifying problems in each area. Thus, the efforts have not added up to a holistic and comprehensive solution. In our view, this results from a selective perception of what algorithmic (un)fairness is: some researchers see it as a technical phenomenon and seek solutions in technology; others see it as a symptom of discrimination in society and seek a remedy in changing the social structures that enable discrimination. Both approaches make valid points, and they do not directly contradict each other: at first glance, enhancing algorithms or manipulating data does not interfere much with political or social agendas. However, this divergence may confuse practitioners, decision-makers, and society. Apart from clearly misguided decisions, such as when the lack of sensitive attributes in the data (a political decision) disables the use of the most effective AF solutions (a technical decision), other problems may also emerge. Thus, this section addresses why a sociotechnical perspective is needed: it discusses the practical consequences of a sociotechnical framing of AF and thereby supports our overall claim that such a perspective is urgently needed.
Without a coherent perspective that acknowledges the interdependencies between the social and the technical aspects of AF, organizations may be reluctant to effectively tackle this problem. If they treat algorithmic (un)fairness as a purely technical problem, they may assume that adding a social element will sufficiently solve unfairness. They may believe that positioning an employee as a control instance who needs to sign off decisions made by the algorithm will sufficiently mitigate discrimination. This aligns with the human-in-the-loop claim, according to which introducing human control into algorithmic decision-making will prevent or limit unintended consequences of purely algorithmic decision processes (Brockman, 2019;Marjanovic et al., 2021). Following the sociotechnical perspective, we argue that this reasoning is problematic. It assumes that the human and the algorithm are distinct moral agents capable of making an independent decision, and it implies a picture of a righteous and critical person who is able to question algorithmic output or assess the focal situation. We argue that the algorithm and the person are not as independent as it may seem. Algorithms empower and constrain persons: they may direct human attention to only some aspects to be considered; they may require a specific decision output format; and they may require the person to take decisions in a decontextualized environment. Likewise, persons may inappropriately interpret the algorithm's output or take random, uninformed decisions, following the illusion that their opinion is just one out of several votes. For instance, it is known that employees rely on decisions taken by the algorithms and rationalize or explain them rather than controlling their quality and bias (Rhue, 2019). 
Thus, rather than calling for a human in the loop, we need to gain an understanding of human-algorithm ensembles as collective moral agents and thus respect the complex mutual influences between the ensemble's subparts (Verbeek, 2011, 2014). In our view, the sociotechnical perspective on AF is the first step in this direction.
The evaluation of complex decision processes involving persons and algorithms as parts of an ensemble requires an overall approach to assess whether the ultimate outcomes produced by the sociotechnical system are fair and to identify reasons for potential unfairness. If one focuses on the technical and the human components separately, one ignores unexpected interferences between the components, risking an unfair final decision. Finally, without holistic guidance, companies risk choosing a combination of incompatible technical and organizational fairness measures, especially if such decisions are made by different units (e.g., IT and human resources). Accordingly, we argue that it is crucial to view AF as a sociotechnical construct.
IS has a long tradition of systemic sociotechnical approaches to solving urgent and important problems. For instance, IS encapsulated the social and psychological effects of organizational implementation of new technologies in the sociotechnical concept of technostress (Tarafdar et al., 2017). IS is also concerned with supporting collaboration that uses a mix of technical and social components: collaboration engineering (Briggs et al., 2003). Finally, IS offers a holistic understanding of trust as a phenomenon not limited to persons but also emerging in relation to technology (Söllner et al., 2012, 2018). We see great potential for approaching fairness as a sociotechnical phenomenon. While decisions in all domains will increasingly rely on algorithmic processing of data and will involve ML predictions (Agrawal et al., 2018), they will also involve a person as a (co-)decision-maker, target of the decision, or evaluator. It is essential to understand interactions between technical and social components to embrace the complexity of fairness. While existing research makes valuable contributions to either technical or social aspects, the IS discipline, owing to its sociotechnical anchoring, can better understand how these efforts complement, depend on, and mutually influence one another. Nonetheless, studies of AF in IS have so far resorted to the technical perspective and have focused on ways to improve algorithms through, for instance, more adequate scoring methods (Wang et al., 2019), better assessment of trade-offs (Haas, 2019), or better understanding of current applications (van den Broek et al., 2019). This has led to some very recent suggestions or calls to extend the notion of AF to embrace the behavioral, procedural, and contextual aspects of algorithmic decision-making (Kordzadeh & Ghasemaghaei, 2021; Marjanovic et al., 2021).
Although those extensions and calls are a step in the right direction, they have neither explicitly problematized the assumptions that underlie existing formulations of AF, nor addressed the dynamics of interaction and balance between the social and the technical aspects of AF. In our view, IS needs a reorientation toward the sociotechnical perspective if it is to provide a holistic understanding of AF as a phenomenon.
---
A Sociotechnical Perspective on Algorithmic Fairness
Clearly, neither a technical nor a social view alone is sufficient. We will now position AF as a sociotechnical phenomenon.
The sociotechnical perspective has formed the foundation of IS research for decades (Davison & Tarafdar, 2018; Lee et al., 2015; Sarker et al., 2019). It builds on the key insight that work involves interactions between persons and technology. Persons, including individuals and collectives, as well as the relationships among them or attributes thereof (including structures, cultures, economic systems, rituals, best practices, organizations, or social capital), form the social component (Lee, 2004; Sarker et al., 2019). The technology, including human-made hardware, software, data sources, and techniques that describe ways of using them to achieve human goals or serve human purposes, forms the technical component of a sociotechnical system (Lee, 2004; Sarker et al., 2019). The sociotechnical view stresses the mutual interdependency between the components, so much so that connections between them are reciprocal and iterative, but neither incidental nor nominal (Lee, 2004). The social and technical components engage in joint optimization to create a productive sociotechnical system (Sarker et al., 2019).
The IS tradition has also acknowledged that components should be treated equivalently regarding importance and impact (Beath et al., 2013). A sociotechnical account of AF requires careful consideration of how machines and persons can and should co-engage or collaborate to achieve fairness.
We will now first examine the characteristics of AF that suggest the sociotechnical lens is most appropriate to attend to it, providing arguments for why it is a sociotechnical rather than a social or a technical phenomenon (Section 5.1). We will then show how the sociotechnical view addresses the limitations of existing AF research (Section 5.2). We will refer to the running example of a recidivism prediction system and will use this example at various points to ease the understanding of an abstract matter.
---
Why is Algorithmic Fairness a Sociotechnical Phenomenon?
Multiple characteristics of AF position it as a sociotechnical phenomenon. First, the algorithm creation process is a social practice. Developing algorithms is to some extent a research activity driven by epistemic values, including consistency, accuracy, or generalizability (Laudan, 1968). Similarly, contextual values that reflect the developer's personal or humanist concerns are equally important (Friedman et al., 2013; Kincaid et al., 2007; Nissenbaum, 2001; van de Poel & Kroes, 2014). The developer's background may impact on their perception of what is fair and for whom. Since definitions of fairness relate to the stakeholders' interests, developers could tend to prefer some fairness measures over others (Narayanan, 2018; Wong, 2020). Second, algorithms inevitably impact on the lives of individuals, groups, and societies based on where they are used and what they are used for (Draude et al., 2019; Mohamed et al., 2020; O'Neil, 2016). A widely used algorithm for selecting healthcare system entry was found to discriminate against racial minorities, thereby affecting thousands of people (Obermeyer et al., 2019). Third, algorithms have become, and continue to be, an object of public debate around AF (O'Neil, 2016). Algorithms have long been an object of sociotechnical practice.
At the same time, algorithms are involved in fairness assessments. Decades ago, the justice system moved from narrative-based consideration of cases to prosecution that relies on ML techniques (Aas, 2006; Harcourt, 2015). For instance, algorithms are used to predict areas in need of policing, to automatically identify potentially criminal individuals online, or to analyze biological or computer data acquired during prosecution (Harcourt, 2015). All these applications bear risks of discrimination: these systems' accuracy may be higher for some types of crimes or for some ethnic groups.
Similarly, digital technology was shown to restrict the freedom of public administration: the mythical 'computer', rather than the individual officer or the organization, was taking decisions about what a fair welfare subsidy is (Dolata et al., 2020; Landsbergen, 2004). Although the public identified humans as the decision-makers accountable for fairness, in fact, the work was and continues to be distributed between social and technical components. As presented here, there is not only a need to consider AF as a sociotechnical phenomenon; there are also good reasons to do so.
---
Developing a Sociotechnical Perspective for Algorithmic Fairness
We will now develop a sociotechnical perspective for AF, and start by discussing basic constructs relevant to the sociotechnical view of AF. We will also provide examples from the case of recidivism risk assessment (Dieterich et al., 2016).
The sociotechnical perspective focuses on interactions between the technical and social components of an IS. For instance, the recidivism prediction case involves judges, penitentiary workers, inmates, and their attorneys, as well as institutions and law enforcement rules. All these individuals and collectives form the social component of the prison release decision system -they take decisions or are directly affected by them.
When individuals take decisions in companies or organizations, they widely rely on decision support systems. Such systems, like any technological system, have various components. In the AF case, the ML component is particularly relevant (i.e., the component that predicts the likelihood of a person committing a crime again after release). Through reciprocal interactions, the components achieve coherence (harmony, fit, joint optimization), which results in an effective IS (Lee, 2004). All the individuals involved in penitentiary processes and the tools they use are engaged in continual adaptation. They establish new work practices that allow them to take better decisions, while the algorithms are retrained based on decisions taken or are simply changed to reflect new rules or routines. Based on this, we propose that a sociotechnical view of AF assumes complex relationships between social and technical components, such that the working of the overall system cannot be derived from the structure or internal processes of its components. This specifically implies that we cannot predict that an overall IS will become fairer by only improving the technical component's fairness.
An effective IS should lead to better instrumental and humanistic outputs (Sarker et al., 2019). Decision systems are typically employed to improve the decision accuracy while reducing the decision costs (Power, 2008). For instance, a recidivism risk analysis system should relieve the overcrowded justice system, reduce processing time for jail release applications, provide prison inmates with earlier decisions, and lead to more frequent application processing cycles. At the same time, it must obey ethical norms, including fairness. Thus, a sociotechnical view of AF assumes multiple interrelated or even contradictory outcomes beyond fairness. This implies that fairness cannot be seen as a unique goal. Instead, the overall system should be evaluated against multiple goals, including fairness, where fairness is a necessary condition but is not sufficient to ensure that the system is useful.
An IS is embedded in an environment, i.e., a larger social, economic, regulatory, or material context, which offers structures for the IS's operation (Briggs et al., 2010; Dourish, 2001). COMPAS, the aforementioned recidivism prediction system, was improved based on societal pressure from NGOs (Washington, 2018). Based on this, we propose that a sociotechnical view of AF assumes a dynamic and mutual interaction with the context. This implies that the IS needs mechanisms to interpret and process inputs (e.g., a changing notion of fairness) or feedback (e.g., changes in the environment and reactions caused by its past outputs).
Apart from the classical elements of a sociotechnical perspective on an IS, we followed Chatterjee et al. (2021) by also considering information as a core element within a sociotechnical system. Given that data play an indispensable role in achieving AF, in our view, this complements the overall sociotechnical perspective we are pursuing here.
The recidivism prediction system relies on data about past inmates (personal data, criminal history, education, etc.) and their offenses. It also mines data about the inmate under consideration, as well as statistical and ML models that link the data. We propose that a sociotechnical view of AF treats information as key to steering the interaction between the social and the technical components. Since data are neither neutral nor independent, they require a critical approach.
In sum, the proposed perspective on AF positions the decision-making as involving humans and algorithms at the very core of the system. The reciprocal interactions between these individuals or collectives and the technology are what enable the system to yield a decision. Information provided to the individuals and to the algorithms is what underlies and structures the decision-making. The system is embedded in the environment, which is affected by the decisions taken by the system and reacts to them. The proposed sociotechnical view provides tools to explore relevant aspects of AF.
---
How Does a Sociotechnical Perspective Surpass Existing Premises?
The sociotechnical framing of AF not only establishes a tool set to precisely understand and describe the nature and mechanisms of algorithmic discrimination. It also helps to overcome the limitations of previous approaches, as discussed in Sections 4.1 and 4.2 and as presented below. This follows the strategy suggested by Alvesson and Sandberg (2011), who recommend reconsidering the problematized assumptions in light of a new theoretical perspective. We will now revisit the premises identified in the literature and compare them to the sociotechnical view.
First, a sociotechnical perspective suggests that solutions to algorithmic unfairness problems may not function properly if they do not factor in social components and the dynamics of adaptation among the components. This wholly contradicts the engineering assumption: presenting a classifier that produces a less biased output is not yet a solution.
We can claim that a solution is successful only if the entire sociotechnical system achieves a state of coherence in which it generates fewer biases.
The proposed sociotechnical lens addresses the equivalence assumptions by acknowledging that different notions of fairness and their operationalizations can each be adequate depending on the overall system's state. The technical literature often sets out to identify the best notion among many. This attempt is destined to fail in complex and dynamic environments, where the fit of each operationalization may vary. The proposed perspective also makes clear that it is not possible to explicate upfront which biases are to be addressed (as the explicitness and engineering assumptions presume). Algorithmic unfairness emerges as an undesired system output. Measuring performance only against biases that were known a priori (i.e., that came as input) is therefore not sufficient; the output would need to be audited against all possible biases.
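The conflict between operationalizations is easy to make concrete. The following sketch (hypothetical data and function names, not taken from any cited study) evaluates two common formalizations, demographic parity and equal opportunity, on two toy models: each metric judges a different model to be the fairer one.

```python
# Illustrative sketch: two common fairness operationalizations can rank the
# same two models differently. All names and numbers are hypothetical.

def demographic_parity_gap(y_pred, group):
    """|P(Y_hat=1 | A=0) - P(Y_hat=1 | A=1)|: gap in positive-decision rates."""
    rate = lambda g: sum(p for p, a in zip(y_pred, group) if a == g) / group.count(g)
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(y_true, y_pred, group):
    """|TPR(A=0) - TPR(A=1)|: gap in true-positive rates."""
    def tpr(g):
        pos = [p for p, t, a in zip(y_pred, y_true, group) if a == g and t == 1]
        return sum(pos) / len(pos)
    return abs(tpr(0) - tpr(1))

# Hypothetical outcomes (1 = did not reoffend) and two models' predictions.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
pred_a = [1, 0, 1, 0, 1, 1, 0, 0]   # equal positive rates, unequal TPRs
pred_b = [1, 1, 0, 0, 1, 1, 1, 0]   # equal TPRs, unequal positive rates

# Demographic parity prefers model A; equal opportunity prefers model B.
```

Since no single metric dominates, choosing among them is exactly the kind of decision that depends on the state and goals of the overall sociotechnical system.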
Finally, the sociotechnical perspective overthrows all the independence assumptions and the bad actor assumption, because all components (including the environment) are involved in creating unfairness or assuring fairness. Through ongoing mutual adaptation, the process is highly dynamic. The process of analyzing and assigning responsibility for insufficient fairness cannot be reduced to a single component.
The translation and distributiveness assumptions relate to mathematics as the primary tool for representing fairness. While this may be true and necessary for the technical component, humans deal poorly with mathematical formalizations, especially in the context of distributions (Kahneman, 2011). Since perceptions of fairness are at the heart of being human (as claimed in anthropology and neurology), humans are highly unlikely to engage in mathematical calculations while making fairness assessments. However, when technical components 'speak' mathematics and humans do not, mutual adaptation can be hindered, preventing the system from being jointly optimized. While the proposed framework does not directly relax this assumption, it makes clear that this point requires attention.
Sociotechnical systems require understanding all the processes involving data, technology, and humans. These components will differ in every application of ML and AF. Human practices and behaviors are situated (Draude et al., 2019; Orlikowski, 2008), which makes a great difference to the fairness of, for instance, a recidivism prediction system versus a system for assessing creditworthiness. Because the environments differ, the components differ; thus, the outcomes also differ. This makes clear that reducing AF to a single approach, following the black box assumptions, is inadequate, both because it omits numerous technical approaches and because it suggests that these situations are comparable. Finally, our proposed perspective interrogates the purity assumption. The technical and societal contexts provide inputs that prescribe the workings of a system and observe system outcomes. However, in many cases, the reactions of society or the justice system to occurrences of algorithmic unfairness have been sluggish. Unfair systems remain in constant use. Thus, it is misleading to idealize either technological solutions or parts of society, as the purity assumption does.
The assumptions we identified negatively influence the variety and applicability of measures against unfairness. The sociotechnical perspective reveals that social and technical components are equally involved in discriminating and in assuring fairness.
Only when one attends to how these components interact, i.e., how decisions emerge from the mutual adaptation and optimization between humans and algorithms, and how these interactions relate to the instrumental and humanist outcomes the system should produce, will we be able to effectively address AF. To overcome the current lack of a sociotechnical perspective on AF, we will now propose research directions for creating a sociotechnical knowledge base on fairness.
---
Directions for Sociotechnical Research into Algorithmic Fairness
The sociotechnical perspective on AF overcomes assumptions present in the literature and offers a framework for identifying sources of algorithmic bias. To date, sources of bias have either been related to the ML workflow (Feuerriegel et al., 2020; Vokinger et al., 2021; von Zahn et al., 2021) or ascribed to social biases (Mohamed et al., 2020; Wong, 2020). We will now address sources of bias identified in the literature (see Appendix) and classify them according to the components of a sociotechnical system or the interrelationships between them. As presented below, the origins of algorithmic unfairness are distributed across the entire sociotechnical system. We identify directions for IS research to eliminate various algorithmic discrimination types at their source.
However, these research directions are rarely limited to one component of the sociotechnical system, but instead call for a holistic approach to AF. For instance, to eradicate biases that emerge through low-quality information, one must address the social and organizational processes related to data generation and management. Similarly, to overcome technological limitations, one may need to ascertain the perspectives of various stakeholders. Accordingly, we will now address the identified sources of algorithmic unfairness, discussing how a holistic, sociotechnical perspective helps to resolve them.
---

Technology as a Source of Algorithmic Unfairness

Technology may be responsible for specific algorithmic discrimination types but, in contrast to the technical perspective, we claim that potential solutions relate to social and organizational practices or procedures. We claim that algorithmic unfairness can only be effectively addressed from a sociotechnical perspective and propose lines of research to substantiate this standpoint. What sorts of core theories should be employed to inform the design and evaluation of technological artifacts that need to obey fairness and potentially other ethical values? Which measures should be used to ensure that the system does no harm? How can we standardize evaluations and make them applicable to both practitioners and researchers? How do we conduct those evaluations in realistic contexts without risking real harm to test subjects?
The evaluation and auditing of technical solutions requires both appropriate metrics and procedures as well as insights into the algorithms and their decision routines.
Based on these insights, designers could identify risks of bias even before testing the system, and end-users could easily review system decisions based on the applied procedures and the considered data. The explainability or interpretability of ML is a subdiscipline of its own (Molnar, 2020), and despite many efforts from several scientific communities and practitioners, its progress and practical uptake remain limited (Caruana et al., 2020; Mitchell, 2019; Sejnowski, 2018). While the discourse on explainable AI has reached IS (Ågerfalk, 2020; B. Kim et al., 2020), it remains nascent. The fairness example makes it clear that we need technological systems that reflect and explain themselves and their actions. Research outside IS has focused on two stakeholder groups that particularly benefit from ML explainability: developers, who, based on such insights, can improve their models without costly experimentation (primarily in the ML literature), and end-users, who should become able to enter a dialogue with the machine (primarily in the human-computer interaction literature). Here, the existing studies have identified additional stakeholders who depend on a deep understanding of what algorithms do: (1) high-stakes decision-makers who rely on predictions, such as governments who try to adapt plans to worldwide developments such as pandemics or global warming and refer to ML-based predictions, but also need to explain their decisions to the public; (2) auditors of medical application tools, who decide whether or not an application is a medical product; and (3) judges and others in the justice system. IS can contribute to this discourse by identifying and specifying domain-dependent and context-dependent requirements for explainable ML.
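One minimal form of such algorithmic self-inspection can be sketched as follows (a hypothetical audit heuristic, not an established standard or a method from the cited literature): shuffle the sensitive attribute across examples and count how many of the model's decisions flip.

```python
# Hypothetical audit sketch: shuffle the sensitive attribute and measure how
# many of the model's decisions flip. A model that ignores the attribute
# flips nothing; a high flip rate signals a dependence worth auditing.
import random

def flip_rate(predict, rows, sensitive_idx, seed=0):
    rng = random.Random(seed)
    baseline = [predict(r) for r in rows]
    values = [r[sensitive_idx] for r in rows]
    rng.shuffle(values)
    permuted = [r[:sensitive_idx] + (v,) + r[sensitive_idx + 1:]
                for r, v in zip(rows, values)]
    flipped = [predict(r) for r in permuted]
    return sum(b != f for b, f in zip(baseline, flipped)) / len(rows)

# Toy models over rows of (sensitive_attribute, score).
blind  = lambda row: int(row[1] > 0.5)                  # ignores the attribute
biased = lambda row: int(row[0] == 1 and row[1] > 0.3)  # keys on the attribute
rows = [(1, 0.9), (0, 0.8), (1, 0.2), (0, 0.6), (1, 0.7), (0, 0.1)]
```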
Who are the stakeholders that rely on a deep understanding of ML technologies? What explanation type is needed for each of them? How can a system reflect on its predictions concerning fairness or other fundamental values? How do we combine the autonomy of a technological system with the requirement to make everything explicable?
---
Information as a Source of Algorithmic Unfairness
Information, embracing both data and models, is another source of algorithmic unfairness. Data not only perpetuate social biases encoded in the data generation process, but may also introduce new biases (through missing entries, an imbalanced representation of features, or inadequate scales to represent the features) (Buolamwini & Gebru, 2018; Saleiro et al., 2020). Many studies have proposed solutions such as re-weighting, filtering, and balancing the datasets (Kallus & Zhou, 2018; Yang et al., 2020) or adapting algorithms, including specific pre-processing and post-processing steps (Gebru, 2020; Samadi et al., 2018). They attempt to mitigate unfairness in the information subsystem by adaptation in the information/technology subsystem. In recidivism prediction, unbalanced data introduced more racial bias than biased data (Biswas et al., 2020). However, balanced and complete data are not easily available. ML solutions require vast amounts of data and often rely on datasets that were developed neither for use in ML nor with a specific focus on fairness, but were collected for other documentation purposes.
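To make one such pre-processing idea concrete, the following minimal sketch follows the spirit of reweighing: each (group, label) combination receives a weight of the form P(A=a)P(Y=y)/P(A=a, Y=y), so that group membership and outcome appear statistically independent to the learner. The data and names are illustrative, not drawn from any cited study.

```python
# Sketch of reweighing as a pre-processing step: weight each example by
# P(A=a) * P(Y=y) / P(A=a, Y=y), so that the weighted data show no
# statistical dependence between group membership and outcome.
from collections import Counter

def reweigh(groups, labels):
    n = len(labels)
    p_a, p_y = Counter(groups), Counter(labels)
    p_ay = Counter(zip(groups, labels))
    return [(p_a[a] / n) * (p_y[y] / n) / (p_ay[(a, y)] / n)
            for a, y in zip(groups, labels)]

groups = [0, 0, 0, 1, 1, 1]   # hypothetical protected attribute
labels = [1, 1, 0, 1, 0, 0]   # hypothetical historical outcome
weights = reweigh(groups, labels)
# Underrepresented combinations, e.g. (group 0, label 0), are weighted up.
```

Note that such a fix operates purely within the information/technology subsystem; whether the reweighted data match the social component's notion of fairness remains a separate, organizational question.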
While organizations globally possess, collect, and process large volumes of data, only a fraction of these data is accessible to ML researchers and practitioners. This creates obstacles to the development and evaluation of fair models and algorithms. While data governance and management have a long tradition in IS (Counihan et al., 2002; Goodhue et al., 1988; Kettinger & Marchand, 2011; Otto, 2011), the frameworks and practices proposed there pertain mostly to issues of investment and value generation in the organization.
They rarely thematize how to leverage the value of data via ML and how to ensure that the outcomes are fair. The AF discourse calls for putting these issues on the agenda so as to extend the data governance and management literature in IS.
Which data management practices are adequate to generate fair data or make existing data fair? How do we incentivize organizations to follow those practices? How do we ensure fairness in combination with other desired features such as consistency, integrity, or security? How can existing data be made available to researchers engaging in the study of AI for humanist goals?
---
Interactions between Social and Technical Components as a Source of Algorithmic Unfairness
Algorithmic unfairness emerges owing to mutual adaptation between social and technical components. Humans often buy into the myth of infallible or objective algorithms and data. In other cases, they deliberately move the responsibility for decisions to 'the computer' and rarely question it (Orr & Davis, 2020). In doing so, they co-create unfairness (Hoffmann, 2019). However, adaptation in the opposite direction can also generate discrimination. Data, models (including causal models), algorithms, and pre-processing and post-processing routines are created and curated by humans and can perpetuate their individual social biases. COMPAS's model was trained using historical data from a law enforcement system dominated by white officers, who may have made decisions following an implicit or explicit racial pattern (Gladwell, 2019; Starr, 2014). The model adapted to this pattern and reproduced it more broadly (Kirkpatrick, 2017).
Without systematic human audits, given the human tendency to outsource tough decisions, the system discriminated against prison inmates based on their race.
Interactions between social and technical components yield appealing fields for IS research. The IS community is interested in studying the division of work between humans and AI, using terms such as "machines as teammates" (Seeber et al., 2020) or the "future of work" (Ågerfalk et al., 2020). While this research often presents AI as a way to empower human workers (Nolte et al., 2020), AF requires both empowerment by machines and empowerment against machines. While we do not see machines as evil, there are situations in which humans require a sense of self-efficacy and sovereignty to disagree with and push back against a machine. This can be achieved through adequate control mechanisms in machines, through assigning the "last word" to humans, or through partnership and dialogue between the two.
---
What is the appropriate division of work between humans and machines in high-stakes decision-making? How do we establish a partnership decision-making process between humans and machines? How do we incentivize workers and organizations to critically question machine predictions? How do we prevent 'groupthink' in human-machine teams?
Answering these questions can contribute to a more differentiated discourse on machines as teammates; it can also establish a more dynamic adaptation relationship in the ML domain.
Similarly, IS has addressed technology development as a sociotechnical endeavor (Hassan & Mathiassen, 2018). Further, IS has for decades dealt with dynamics and agency in complex systems, with a special focus on AI (Ågerfalk, 2020; Fang et al., 2018; Wastell, 1996).
Autonomous technologies as parts of sociotechnical systems introduce a new agency type beyond humans (e.g., actor-network theory) (Ågerfalk, 2020;Rose et al., 2005). Many other frameworks and theories for analyzing behaviors, social practices, or changes in an organization ascribe agency solely to humans (Karanasios, 2018;Karanasios & Allen, 2018;Rose et al., 2005). The AF discourse supports the need to review and update frameworks and provide robust definitions of a nonhuman agency so as to facilitate the discourse within IS and adjacent disciplines. This can help to turn AF discourse away from bad actor arguments toward a shared solution-oriented effort.
How do the agencies of human and nonhuman entities impact on the generation of values in sociotechnical systems? How do the agencies interfere with each other? What are the intended and unintended consequences of nonhuman agency? How should fairness be guaranteed, despite agency being distributed across multiple human and nonhuman actors?
Finally, IS intends to develop stable systems, i.e., ones that reach the optimal fit between the technical and the social components. Borrowing the notion of entropy from thermodynamics, Chatterjee et al. (2021) framed the level of discrepancy or non-alignment between the components in an IS as entropy. In the case of AF, entropy grows when the technical component does not yield fair results as expected by the social component. This mismatch leads to an unstable, incoherent overall system. Following the thermodynamics metaphor, one could say that the system heats up. This relationship points to a larger discourse relating to the application of AI: value alignment (Russell, 2019). Whenever the values that underlie the technical and the social components differ, the performance of the overall sociotechnical system declines. Chatterjee et al. (2021) claimed that information acts as a moderator in this situation: coherent and useful information can cool down the system, while incoherent or contradictory information may make the entropy rise even further. In short, the quality of information introduced into the system is crucial for reaching a stable and coherent state. The literature suggests that AF requires high-quality data. Rather than adding more training data, it is more important to provide information with specific characteristics. On the one hand, this information may embrace carefully selected fairness constraints that fit the social components' expectations (Binns, 2018; Srivastava et al., 2019a). On the other hand, it may mean selecting or manipulating the training data accordingly by filling blind spots, improving the variety of the dataset, or ensuring that the training data fit the application environment (Kazemi et al., 2018; Miron et al., 2020). Other information types can likely help to achieve a fair state of the overall system, which offers potential for IS research.
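As a minimal, hypothetical illustration of how information can carry a fairness constraint into the technical component, consider a post-processing step that derives group-specific decision thresholds so that both groups receive positive decisions at the same target rate. All scores, groups, and the target rate are illustrative assumptions, not a prescription from the cited work.

```python
# Hypothetical post-processing sketch: derive a per-group threshold so that
# each group receives positive decisions at the same target rate.

def threshold_for_rate(scores, target_rate):
    """Threshold that accepts roughly `target_rate` of the given scores."""
    ranked = sorted(scores, reverse=True)
    k = round(target_rate * len(ranked))
    return ranked[k - 1] if k > 0 else float("inf")

def group_thresholds(scores, groups, target_rate):
    return {g: threshold_for_rate([s for s, a in zip(scores, groups) if a == g],
                                  target_rate)
            for g in set(groups)}

scores = [0.9, 0.8, 0.4, 0.3, 0.7, 0.6, 0.5, 0.2]
groups = [0,   0,   0,   0,   1,   1,   1,   1]
thresholds = group_thresholds(scores, groups, target_rate=0.5)
decisions = [int(s >= thresholds[a]) for s, a in zip(scores, groups)]
# Both groups now receive positive decisions at the same rate.
```

Here the target rate is precisely the kind of information the social component must supply; the technical component alone cannot decide which rate, or which fairness notion, is the appropriate one.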
---
Which information types reduce entropy in sociotechnical systems? How does one identify the entropy level in algorithmic decision-making systems? How does entropy change over time in such systems? How can information transfer the desired notion of fairness, apart from mathematical formulas?
---
The Environment as a Source of Algorithmic Unfairness
Finally, interactions with the environment may introduce or perpetuate algorithmic unfairness. This can emerge as a result of governance issues (L. Hu & Chen, 2020; Kuhlman et al., 2020; Noriega-Campero et al., 2020), social order (Lepri et al., 2018; Mohamed et al., 2020; Rosenberger, 2020; Wellner & Rothman, 2020; Yarger et al., 2019), or public discourse around AI (Araujo et al., 2020; Helberger et al., 2020). For instance, Kuhlman et al. (2020) identified the lack of cultural diversity among ML researchers as a reason for algorithmic bias. They recommended exchange and feedback between ML researchers and members of underrepresented or protected groups as a remedy. Such works address the broader societal context's impacts on the sociotechnical system.
Here, influence in the opposite direction is also likely. Given that the notion of fairness is not static and is constantly being formed and disputed (Degoey, 2000), decisions taken by technological artifacts can change what society considers fair over the long term. This exchange needs careful analysis.
It is common knowledge in IS that the environment of an IS affects how technology is utilized and what outcomes that use produces (Dennis et al., 2001; DeSanctis & Poole, 1994; Orlikowski, 2008). Notions of fairness are ingrained in these structures (Hufnagel & Birnberg, 1989) and are subject to change when structures change. IS has studied how work practices, organizational hierarchies, and economic structures change owing to technological innovation (Allen et al., 2013; Avgerou, 2001; Heracleous & Barrett, 2001). It is now necessary to understand how fundamental values change through the introduction of new technologies. Understanding the forces involved in this evolution is necessary to predict how technologies will impact on society. While computer science researchers and practitioners are often confronted with accusations of developing and rolling out technologies without considering their negative effects (Tarafdar et al., 2013), they often lack tools to predict and analyze undesired consequences. The origins of AF are the best showcase for this. Thus, the IS discourse on the implications of technology use needs to be updated to focus on predicting rather than reacting to undesired developments. Having acknowledged the complexity of changes through technological progress, the IS community needs to step in as a moderator of these changes.
---
How do technology and sociotechnical systems impact on the fundamental values in organizations and societies? How can we govern technologies' impacts on society? How do sociotechnical systems interact with one another to establish shared values? How do notions of fairness change through these interactions?
Feedback loops can introduce another source of unfairness. After deployment, an IS's output can influence the environment, which impacts on future inputs to the system.
It may introduce bias into the system or may have unintended long-term consequences for the environment (Barocas & Selbst, 2016). In our previous example, the system for assessing recidivism risk may draw on socioeconomic variables such as the previous income so as to predict a defendant's risk of committing another crime. Such a system may introduce a feedback loop: when identifying a high recidivism risk, the system may prolong an offender's sentence, and this longer sentence could negatively affect this individual's socioeconomic status. In this case, one can expect a lower income and thus an increased likelihood of a new crime. When the individual then commits a new crime, the system for assessing recidivism risk may then rely on the lower income and may again recommend a prolonged sentence, thereby diminishing the chances of the individual's early release. As seen in this example, the feedback loop is reinforced by the underlying ML system (Barocas & Selbst, 2016). Similar examples of feedback loops have been documented in online advertising, with female candidates being shown fewer ads for high-paid jobs (Datta et al., 2018).

Figure: A system for recidivism risk assessment is shown here. The system grants an early prison release by using ML, with a higher salary (or income) predicting a lower recidivism risk, leading to an early release (shown by +; bottom). An early prison release also increases chances for social advancement and thus a higher salary (shown by +; top). This leads to a reinforcing behavior (indicated by R).
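The reinforcing loop in the recidivism example can be sketched as a toy simulation. The income-to-risk mapping, the 0.5 threshold, and the 20% income penalty below are illustrative assumptions, not values from the cited studies; the sketch only shows how the loop makes initially small differences ratchet upward.

```python
# Toy simulation of the reinforcing feedback loop described above:
# a risk model penalizes low income; a high score prolongs the
# sentence, which depresses future income, which raises the
# predicted risk on the next pass. All numbers are assumptions.

def predicted_risk(income: float) -> float:
    """Hypothetical risk score in [0, 1]; lower income -> higher risk."""
    return max(0.0, min(1.0, 1.0 - income / 100_000))

def simulate(income: float, rounds: int = 5) -> list[float]:
    risks = []
    for _ in range(rounds):
        risk = predicted_risk(income)
        risks.append(round(risk, 3))
        # A high score triggers a longer sentence, which in this toy
        # model cuts future income by 20%; a low score leaves it intact.
        if risk > 0.5:
            income *= 0.8
    return risks

# Two individuals differing only in starting income diverge over time:
print(simulate(60_000))  # risk stays moderate and stable
print(simulate(30_000))  # risk ratchets upward each round
```

In this toy model, the divergence disappears only if the risk score stops depending on a variable that the system's own decisions influence, which is the structural point Barocas and Selbst (2016) make.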
In sum, algorithmic unfairness does not emerge solely from algorithms or the data used for decision-making. It can arise out of ineffective or 'lazy' adaptations between technological and social components, or out of interactions within a larger context. Yet the research has focused on technical solutions, solving issues that emerge from interactions between components rather than addressing the entire sociotechnical system or its context. As noted, a sociotechnical approach requires that one acknowledge the interactive and equal roles of social and technical components.
---
Implications
Positioning AF as a holistic, sociotechnical phenomenon has implications for IS practice and research. We see the following implications as particularly relevant and impactful. First, AF should be seen as a multidisciplinary endeavor in which researchers transcend the boundaries of individual disciplines, combining strengths from different disciplines to advance the interface of social and technical systems at risk of algorithmic (un)fairness. Second, AF is complex. Singular, one-off interventions work only for a short time until data, algorithms, or human perceptions change. It is crucial that legislative bodies and legal practitioners understand this. Otherwise, they risk multiplying specific rules on which data can be stored and how, or on which measure to apply in the evaluation; such rules may fail to advance de facto humanist goals and values such as fairness.
Measuring outcomes of sociotechnical systems is more appropriate than intervening in low-level processes within a sociotechnical system. Third, AF is a valuable objective and has real impacts on organizations. If the sociotechnical systems reach a coherent state without discrepancies between the social and the technical, the overall system's productivity will increase. This is only possible if notions of fairness align across the IS.
If employees struggle with notions of fairness in the software they use, they may refuse to use it or resort to workarounds. Organizations should implement fairness-oriented solutions carefully, not just for society but also for themselves.
The themes emerging from AF research also have implications for IS education and practice. Engineering education should equip students with the necessary technological understanding and cover humanistic, social, and behavioral dimensions. Practitioners should judge their artifacts critically. For academia, it is important to provide detailed descriptions of all datasets; this helps to assess the risk of biases introduced during data collection that can adversely affect decisions' fairness.
The insights we have presented have limitations that are typical of any literature review. Field portrayal is limited by the databases and keywords used, along with the filtering used in selection processes. Further, the summary of the articles we presented is limited; a complete list of references appears in the Appendix, and we encourage the reader to consider them in detail.
---
Conclusion
We claim that AF is just a precursor to a larger debate on value alignment relating to autonomous or semi-autonomous technologies. While an abstract value alignment discourse is emerging outside IS (Russell, 2019), the AF example shows that problems can become complex and fuzzy once confronted with reality. In particular, the reciprocal interactions between the social and the technical components, as well as their embedding in a larger environment, disrupt theoretically valid ideas. This is exemplified by the purely technical approach to fairness in automated decision-making. Here, state-of-the-art technical approaches cannot guarantee a fair outcome at scale. Similar effects may be expected when algorithms begin to directly affect other humanist values. Thus, it is crucial to formulate the problems of value alignment in a sociotechnical way from the outset. This means considering how social and technical components will change depending on the desired humanist outcomes, how the interdependencies between them can be employed to prevent undesired outcomes, how they can complement one another, which moderators can effectively help them achieve a coherent state of low entropy, and how the broader environment is affected by the outcomes the system produces. To date, the debate on value alignment has focused more on creating rules to ensure that algorithms do not overrule humans and obey human values, whatever they may be. This formulation forgets that social values and specifications thereof will change through interactions with technology, and will evolve and undergo adaptation, such that the social and the technical components will experience states of low and high entropy. The AF case makes clear that the system is dynamic and complex, and human values have not yet been articulated in a way to make algorithms simply obey them. AF could not be achieved with a simple constraint, with a range of constraints, or with supervised and unsupervised approaches.
The debate on value alignment requires that one acknowledge that values are shaped and negotiated in sociotechnical processes. IS is predestined to contribute its practical orientation, technical understanding, and sensitivity to societal progress to this discussion. IS should pursue further research into sociotechnical AF before algorithmic (un)fairness turns into another 'dark side' of IT (Tarafdar et al., 2013).
To create the multidisciplinary set, we employed the following procedure inspired by the systematic approaches to literature studies in IS (vom Brocke et al., 2009). (1) We composed a broad search query to accept all potentially relevant articles. The query we used looks as follows (presented using the Scopus syntax): (("*fair*" PRE/1 ("ML" OR "machine learning" OR "AI" OR "artificial intelligence")) OR (("algorithmic*" OR "AI" OR "ML" OR "machine learning" OR "artificial intelligence") PRE/1 ("fair*" OR "justi*" OR "bias*" OR "unfair*"))). This query accepts various phrases including "fair AI," "fairness-constrained ML," "algorithmic fairness," or "AI bias." (2) The following criteria were used for selection of the relevant articles: (1) Is it an article with an individual contribution? (eliminating editorials, commentaries, calls for papers, or tutorials).
(2) Does it use the words used in the query with the intended meaning? (eliminating articles using "fair" to mean satisfactory or bright, mentioning "affairs," or using the abbreviation "ML" to refer to, e.g., maximum likelihood or milliliters).
(3) Does it refer to fairness, justice, or discrimination? (eliminating articles referring to algorithmic or social bias in terms of systematic deviation without implying unfair treatment of anyone). (4) Does it refer to machine learning, artificial intelligence, or algorithmic decision making as a source, a remedy, or an aspect of discrimination? (eliminating articles referring to unfairness in general terms without establishing a link between technology and discrimination). (5) Does it make a contribution towards AF or discuss it as a core aspect? (eliminating articles which refer to AF only as an area for future research or as motivation). Overall, the selection criteria were formulated and applied with caution to guarantee that the selected literature represents the broad discourse about algorithmic fairness.
Overall, 280 articles from various disciplines, representing diverse viewpoints, form the basis of this critical review: 166 in the conference set and 114 in the multidisciplinary set. The articles were subsequently analyzed as presented in the next subsection.
---
Literature Classification
Each article was classified according to four dimensions: (1) fairness perspective, (2) IS component, (3) methodological paradigm, and (4) scope. (1) The fairness perspective describes the approach towards or framing of fairness that dominates in the given article. Here we differentiate between social and technical perspectives. Some articles treat fairness as a social phenomenon and acknowledge the human origin of fairness. Others see it as a technical phenomenon, that is, as something that originates in the data or can be expressed in statistical or mathematical terms. We did not encounter a paper that commits to the sociotechnical perspective as presented in Section 5 and characterized by mutual interdependency, joint optimization, and equivalency between the technical and social components (Sarker et al., 2019). Several articles nominally refer to a sociotechnical perspective but they discuss AF in relation to the whole society, position it on a political level, and refer to frameworks provided by gender studies (Draude et al., 2019), colonialism (Mohamed et al., 2020), feminism (Wellner & Rothman, 2020), democracy (Wong, 2020), or human rights (Aizenberg & van den Hoven, 2020). Frequently, these papers use the term "sociotechnical" to highlight the contrast of their perspective with the technical discourse, but they focus on the social aspect of AF by framing fairness as an aspect of human society. Upon careful consideration, we thus decided to count such articles towards the social perspective.
Overall, the technical perspective dominates with 210 items (148 in the conference set, 62 in the multidisciplinary set), while 70 items were characterized as following a social perspective (18 in the conference set, 52 in the multidisciplinary set).
(2) For the IS component dimension, the framework we use for classification of the literature relies on the IS artifact concept offered by Chatterjee et al. (2021), for two reasons. First, it foresees the connection of inputs and outputs through processing within the system and a feedback loop with the environment (Chatterjee et al., 2021). Given the importance of the broad societal impact of the decision algorithms and claims that through interaction with the social environment those algorithms might reinforce disparity (O'Neil, 2016), differentiating between humans who are directly affected by the system and the general society might be useful for better characterizing the existing contributions. Second, the role of data is central to AF, such that Chatterjee et al.'s (2021) attempt to include information as a component seemed beneficial for our purposes. We claim the agency of data becomes obvious in AF, where data frequently and without a clear provenance or an explicit creator "decides" upon an individual or a group's treatment (cf. open corpora such as ImageNet). (3) The methodological paradigm describes the overall scientific approach of an article. The most frequent approach is subsumed under engineering, which involves formulation of a problem, conceptual or formal development of a solution, and evaluation of this solution against the identified problem and comparison with other possible solutions. Overall, 151 studies follow this approach (conference set: 121, multidisciplinary set: 30). 37 studies focus on exploring bias in a specific application domain, a data set, or a case (conference set: 16, multidisciplinary set: 21). These studies contribute understanding concerning the sources of bias or characterize it quantitatively and qualitatively. In addition, 30 articles rely on literature review as their main evidence (conference set: 14, multidisciplinary set: 1). Overall, the studies use a wide range of available methods, but more than 50 percent focus on engineering new approaches to address algorithmic bias.
(4) Finally, the scope describes whether the research was done in a particular application domain, i.e., is domain specific, or whether the paper claims generic insights going beyond a particular domain. Overall, we identified 77 domain-specific articles (conference set: 30, multidisciplinary set: 47). The domains include, among others, health, criminal justice, and loan allocation. The remaining 203 articles all make generic claims (conference set: 136, multidisciplinary set: 67). For instance, they propose a new technique or metric to prevent algorithmic bias, test it on multiple datasets, and offer it as a context-independent solution to algorithmic discrimination.
---
Literature Analysis
Having classified the literature according to the above categories, we then reviewed the papers to identify assumptions they rely on. We focused on the clusters emerging from the classification: starting with the largest ones (e.g., engineering papers with focus on technology and information subsystems and technical perspective on fairness) moving to medium ones (e.g., critical and argumentative papers with focus on broader context and/or social subsystem and social perspective on fairness) and on down to individual cases (e.g., an engineering paper designing a platform for use by the developers of AF solutions, i.e., social subsystem, to compare across various notions of fairness, i.e., technical perspective). We identified typical assumptions for the clusters and concluded that the assumptions map to the perspective on fairness that dominates in the papers. This mapping is reflected in the structure of the paper, which differentiates between the assumptions of the technical perspective and the assumptions of the social perspective.
We grouped similar or overlapping assumptions to offer a comprehensive presentation to the reader.
---
Analyzed Articles
---
Conference Set
---
Appendix
The Appendix details the procedure used in our systematic literature review. The overall objective was to characterize the discourse on AF and to identify potential limitations and assumptions typical for the discourse. We conducted the following steps: literature search and selection, classification, and analysis. At the end of the Appendix, we provide the list of all considered articles.
---
Literature Search and Selection
We selected literature from two sources: (A) Key machine learning (ML) conferences, which explicitly address the topic of algorithmic fairness (AF) to capture the most recent developments in AF, and (B) Query-based search in a multidisciplinary scientific database to capture the discourse on AF beyond computer science. We refer to literature from (A) and (B) as the conference set and the multidisciplinary set, respectively. They were merged before analysis. We only considered peer-reviewed articles and restricted our time span to January 2017 through December 2020.
To create the conference set, we proceeded as follows. (1) We selected outlets with key importance to the AF community in ML by consulting non-IS colleagues and the most recent peer-reviewed overviews of AF (Chouldechova & Roth, 2018; Pessach & Shmueli, 2020). We settled on the following four conferences: the ACM Conference on Fairness, Accountability, and Transparency (FAccT), the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), the International Conference on Machine Learning (ICML), and the Conference on Neural Information Processing Systems (NeurIPS). (2) We reviewed all proceedings of these conferences between January 2017 and December 2020, including overall 9,392 articles (FAccT: 127 items, KDD: 1,286, ICML: 2,919, and NeurIPS: 5,060). For this, we screened the titles of all articles published in these conferences for the occurrences of words suggesting relevance to AF discourse, such as "fair," "justice," "bias," or "discriminate" (and their various morphological forms), leading to a pre-selection of 187 articles. (3) Based on the review of the abstracts, we selected articles that contribute to AF discourse; we also dismissed articles submitted as tutorials. The manual screening of titles and abstracts yielded 166 relevant articles, which were included in the subsequent analysis.
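A title screen of the kind described in step (2) can be approximated with a short script. The regular expression below is a hypothetical reconstruction built from the keyword families the text mentions ("fair," "justice," "bias," "discriminate" and their morphological forms); it is not the authors' actual filter.

```python
import re

# Illustrative title screen, assuming the paper's keyword families.
# This regex is a reconstruction, not the authors' actual filter.
KEYWORDS = re.compile(
    r"\b(fair\w*|unfair\w*|justice|justi\w*|bias\w*|discriminat\w*)",
    re.IGNORECASE,
)

def screen_titles(titles: list[str]) -> list[str]:
    """Keep titles whose wording suggests relevance to the AF discourse."""
    return [t for t in titles if KEYWORDS.search(t)]

titles = [
    "Fairness in Reinforcement Learning",
    "Mitigating Unwanted Biases with Adversarial Learning",
    "Scalable Gaussian Processes for Regression",
]
print(screen_titles(titles))  # the third title is filtered out
```

The word boundary `\b` keeps spurious matches such as "affairs" out of the pre-selection, mirroring selection criterion (2) applied to the multidisciplinary set.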
---
Table columns: Article (Conference Set) | Fairness Perspective | IS Component | Methodological Paradigm | Scope

Jabbari, S., Joseph, M., Kearns, M., Morgenstern, J., & Roth, A. (2017). Fairness in reinforcement learning. In: Proceedings of the 34th International Conference on Machine Learning, 701617
Background: Family members of people living with borderline personality disorder (BPD) experience a considerable objective and subjective burden. This article aims to report on a study that explored family members' lived experiences of having a sibling with BPD in South Africa. Method: This qualitative study used in-depth phenomenological individual interviews, supported by participant observations and field notes for data collection. Data were analysed using Tesch's thematic coding. Results: Seven participants were interviewed, and three themes emerged from the collected data. The study revealed that participants experienced multiple challenges in understanding, gaining control, and struggling to cope with their own lives. Participants also experienced the impact of a lack of communication and education. Lastly, the study revealed that the participants used individual coping mechanisms to cope with having a sibling with BPD. Conclusions: This research illuminated the challenges experienced by family members of a sibling with BPD. These findings provide a basis for recommendations for mental health nurses to promote the mental health of affected family members.

---

Background
Borderline personality disorder (BPD) is a serious mental health challenge worldwide. Globally, the incidence of BPD has been estimated to be about 1-3% of the general population, [1] but little data is available on the number of patients with BPD in South Africa. The most prominent characteristics of people with BPD are pervasive disturbances of interpersonal relationships, self-image and affect. These characteristics are illustrated by marked efforts to avoid rejection, which can lead to identity disturbance, impulsivity, and unstable and intense relationships. [2] Individuals with BPD make frequent use of health services and are difficult to manage without team supervision and support.
Family members of people living with BPD experience a considerable objective and subjective burden. [3] Fossati and Somma [4] agree that BPD places a heavy burden on people suffering from the disorder and those living with them. Moreover, Nouvini [5] confirms BPD often leads to tumultuous interpersonal relationships, where those suffering from the disorder feel invalidated and misunderstood by loved ones who believe them to be manipulative and immoral. People with BPD are prone to feeling angry and alienated from members of their families, while family members may feel helpless and angry at the way their siblings with BPD relate to them. [6] Therefore, living with a person with BPD may cause widespread disruptions in family members' routines. [7] It has also been determined that family members of relatives diagnosed with BPD have ineffective coping strategies related to a lack of communication skills and knowledge, and at times they find themselves having to manage situations for which they are not prepared. [8] The experiences of family caregivers of people diagnosed with BPD have been studied by Hoffman, Fruzzetti and Buteau, [9] Buteau, Dawkins and Hoffman, [10] Lawn and McMahon, [11] and Kay et al. [8] Various challenges, such as negative feelings towards their relatives, social humiliation, financial strain, marital discord, caregiver and financial burden, grief, and isolation, were some of the similarities in these studies. [8, 9, 10, 11] Barr et al. [12] also examined family members' or carers' experiences supporting someone with a personality disorder. The authors [12] reported that carers described the importance of early assessment and intervention for personality disorders. In support, Greer and Cohen's [13] research focused on the partners of individuals with BPD, who experienced emotional challenges, dual roles as both a romantic partner and parental/therapeutic figure, and a lack of control.
Individuals with BPD pose a challenge to their siblings, as BPD affects not only the person with the disorder but also those around them. [7] This notion is supported by Kovacs et al., [14] who claim the mental health problems of one family member influence the whole family system, including sibling relationships. Furthermore, siblings may not identify themselves as carers and therefore feel unable to access health services themselves, even though they play a significant part in providing support for their brother or sister. [15] The Oxford Learners' Dictionaries [16] defines a 'sibling' as one of two or more individuals sharing one or both parents in common. In most societies throughout the world, siblings often grow up together, thereby facilitating the development of strong emotional bonds. In this article, a sibling refers to a sister, a brother, or an adopted brother or sister of a family member who has BPD.
There seems to be a gap in the literature, as previous studies have not clarified siblings' caring relationships and experiences in the South African context. The aim of this article is thus to report on a study that explored family members' lived experiences of a sibling with BPD in South Africa. Insight into these lived experiences could provide recommendations for mental health nurses to promote the mental health of family members affected by this phenomenon.
---
Methods
---
Participants
A qualitative, exploratory, descriptive, and contextual design was applied [17] to capture the essence of family members' lived experiences of a sibling with BPD. Purposive sampling was used to select information-rich participants who met the criteria of having a sibling with BPD. The participants had to have lived with their siblings for most of their lives. They were 18 years and older, either male or female, and were willing to participate in the study. Participants were recruited from a psychotherapy unit that admits people diagnosed with personality disorders. The setting of this psychotherapy unit was a mental health hospital in Johannesburg, South Africa. The researcher introduced the study to the patients admitted to that psychotherapy unit with a diagnosis of BPD, and obtained their written permission allowing interviews with their family members and providing their contact details.
Potential participants were informed of the purpose of the study and invited to participate telephonically.
In line with the ethical requirements of universities, ethical clearance was obtained with the following reference numbers: HDC-01-44-2016, REC-01-135-2016 and M181130. Written informed consent was obtained from all participants before the interviews commenced. Nine potential participants were approached, but only the seven who agreed to participate were interviewed. There were three female and four male participants aged between 22 and 49.
---
Procedure
Data were collected using in-depth phenomenological individual interviews, supported by observations of participants, and field notes kept by the researcher. The main question posed to the participants was: "How is it for you to have a sibling with BPD?" The researcher conducted interviews on a day and time suitable for the participants in 2019. The interview venue was an office at the mental health hospital, free from interruptions. Appropriate follow-up questions were asked during the interviews as required, using communication skills such as probing, reflecting, clarifying, and summarising. The interviews ranged from 37.9 minutes to 55 minutes, were audio-recorded and transcribed verbatim. Any information that was personally identifying was removed from the transcripts.
---
Data analysis
Data were analysed to understand participants' lived experiences using Tesch's thematic coding [17] method. The researcher also adhered to Husserl's descriptive phenomenological approach, [18] which meant she bracketed or put aside her own preconceived opinions. For data analysis, all transcripts were read carefully while making notes as they came to the researcher's mind. Similar ideas were clustered and then organised as major topics, unique topics, and leftovers. The data (transcribed interviews, field notes and observations) were coded, units of meaning were identified and linked together to form themes with supporting categories. Direct quotations from the participants were included to support the identified themes. An independent coder, experienced in qualitative data analysis, also analysed the data, and consensus was reached between the researcher and independent coder after discussion. Themes were then presented to participants for validation to ensure that accurate meaning was captured.
---
Results
Three themes emerged from the data analysis. These are discussed in the sections that follow.
Theme 1: Multiple challenges in understanding, gaining control, and struggling to cope with their own lives
Participants reported that having a sibling with BPD put a strain on families as it affected not only the person with the illness but also those around them. Their reported experience was that their sibling with BPD was emotionally draining. Participants felt sad, frustrated, lost, powerless and angry due to the highs and lows of not knowing what to expect from their siblings. The participants expressed how their families were affected:
"It was affecting everybody in the family, my grandparents ... It was taking a large toll on all of us." (P1, 22yrs old, brother)
"It affected everybody as everyone felt disrespected" (P6, 37yrs old, brother)
"Seeing your sister like that is not fun, it's not nice and it affects you because half the time you ask yourself why, why?" (P4, 33yrs old, brother)
Participants were cautious in their interactions with their siblings, as they did not want to trigger their illness. They also reported their frustration at their sibling's poor cooperation. Others blamed their parents for not taking more urgent control over their sibling with BPD. Some participants felt resentful because they observed their siblings with BPD always got their way and did not realise their impact on others' lives. Participants were angry because their siblings with BPD did not think about how their behaviour affected those around them when they attempted suicide. The following direct quotation supported this view:
"I was angry with her for trying to kill herself, leaving us behind … I felt disappointed and was also sad that I was going to lose my sister over something that I don't even know." (P5, 22yrs old, sister)

Some participants wanted to act in a vengeful manner so that their sibling with BPD could get an idea of how their behaviour and actions affected others:
"I actually feel like it's on purpose, to choose a dress that she would not look good in just to … show her a little bit of … But that is not me" (P3, 24yrs old, sister)
"My graduation is coming … but I have decided not to invite her" (P2, 25yrs old, sister)

Some participants experienced joy and were relieved after their sibling's final diagnosis, when a 'name' was given to all the chaos. These experiences of joy were reported as:
"Now that I understand living with her is easier because I have an idea of what the condition is about." (P5, 22yrs old, sister)

"I read more about it and it did make sense now." (P1, 22yrs old, brother)
Theme 2: Impact of a lack of communication and education
Participants loved their siblings but wished for two-way communication channels where both parties could be heard and validated. Patients with BPD often present with a number of behaviours that are considered disruptive, such as causing self-harm, expressing violent behaviour, impulsivity, or suicidal ideation. These behavioural tendencies put the patient at significant risk to themselves and others if left unmanaged.
Participants appeared to need attention and encouragement from their parents in how they responded to their sibling with BPD. A lack of support from parents had an impact on participants' reactions to their sibling's illness. When they felt unsupported, they were more likely to respond negatively and develop resentment towards their sibling. However, if participants felt supported, they tended to contribute positively to their sibling's care. Parents' lack of support is expressed in the following direct quotations:

"It's not really her doing … but my parents' … she gets a different treatment to us to a point. There is a fine line between treating her differently for her condition and favouring her" (P2, 25yrs old, sister)
"I feel very bad because she is the fragile one ... I can say umatebe (she is like an egg). So even if our parent's ght I try to pull myself together so that I can comfort her and my younger sister and be there for them" (P5, 22yrs old, sister)
Family members experienced that their relationships with the individual who had BPD were conflicting.
Participants were aware of their sibling's lack of awareness of the impact of their behaviour on their family. Still, they yearned for calm discussion and respectful communication:
"If it was not going her way, we would have ghts and it was really not pleasant at all" (P3, 24yrs old, sister)
"She makes it obvious around the house: 'Please keep your distance' and puts that face that says, 'stay there.' We therefore need to have a strategy around how we approach her." (P4, 33yrs old, brother)
According to the participants, healthcare professionals did not communicate with or educate families regarding individuals' BPD diagnosis and the management thereof. Participants reported having difficulties understanding what was going on as they (and their parents) were not informed about their sibling's illness; this caused great confusion amongst the family members. Sometimes, family members act as caregivers for the individual with BPD and are case managers during a crisis, yet they are rarely, if at all, included in the treatment plan when their siblings are diagnosed. Therefore, they may struggle to know how to respond effectively to these individuals' problematic behaviours, like angry outbursts, self-harming acts, and expressions of a fear of abandonment.
Little support and education are offered to family members, and most have limited knowledge of the BPD treatment programmes that their siblings receive when admitted to the hospital. Participants reported they still do not know how to manage or support their siblings after being discharged. The family members were left to search for information themselves and resorted to using the internet to obtain information about the disorder. The following direct quotations are illustrative of this finding:
"We had to nd out on the internet what borderline personality disorder after the doctors told us her diagnosis. I still think it would help us as a family if they explained this illness in more details" (P7, 49yrs old, brother) psychiatrists and everyone had their own ideas they said it is Bipolar and all the medicines they gave her never did anything for her, it actually made it worse and I think that is why she became so aggressive I don't know" (P2, 25yrs old, sister)
"We didn't know what was wrong until we took her to hospital to see a doctor. That is when they told us about the disorder that she has and how we should go about dealing with her" (P6, 37yrs old, brother)
Theme 3: Individual coping mechanisms
Participants reported that having a sibling with BPD put a strain on families and they tried coping with the situation using different strategies. Some coping strategies included defence mechanisms such as suppression, avoidance, rationalising, blaming and projection. Family members often experienced subjective burdens or emotional consequences because of their sibling's illness. Participants used suppression to cope and explained they postponed dealing with their own thoughts or feelings, and put it all aside. Participants' use of suppression as a coping mechanism was explained as follows:
"We just have to bear with her and assist her as much as we can" (P6, 37yrs old, brother) "I'm enraged at her still today (jaws clenched, face turning red) but I will never say it to her but I do feel like that" (P3, 24yrs old, sister)
One participant used avoidance. Family members rejected and avoided contact with the affected individual, and some cut off the relationship and/or stopped talking about that person. A participant said: "I don't want her in my life and it's not a nice thing to say because she is my sister" (P3, 24yrs old, sister), reflecting her use of avoidance as a coping mechanism.
Some participants also used rationalising as a coping mechanism. Participants justified their sibling's acts and moods by reminding themselves they were vulnerable or ill:
"He was diagnosed with HIV, which I later then thought that could have been the reason why he was behaving the way he was" (P7, 49yrs old, brother)
"We didn't know what was wrong, but we felt that the illness began when she was at school because there she had all the freedom … we did not realise that she was using dagga" (P6, 37yrs old, brother)
A few participants blamed their parents for not controlling and disciplining the sibling with BPD. The use of blame is illustrated in these comments:
"Now that you live on your own it's nice but it's not the way I planned my life, so she basically ruined my life" (P3, 24yrs old, sister)
"She didn't think about us or her family who need her. She didn't even think about her children" (P5, 22yrs old, sister)
Participants also used projection as a coping mechanism. They found it hard to understand the cause of their sibling's behaviour changes. Some family members experienced mixed emotions of loving yet hating their siblings due to how they relate to them:
"I mean, this is my sister. I'm not supposed to want her dead" (P3, 24yrs old, sister)
"At home … (hesitating) I also think things that triggers her illness is that she is not happy with the environment that we are living in but there is nothing that we can do because we live in a tavern. Our parents don't have money to buy a house, so they rented a place somewhere and left us at home. I think that is one thing that triggers her. She also mentioned our parents' issue, our parents ght a lot that also affects her. My family is not a conducive family" (P5, 22yrs old, sister)
---
Discussion
This study explored individuals' experiences of having a sibling with BPD. Multiple challenges were experienced by the participants, including a lack of understanding, gaining control and struggling to cope with their own lives. Families and friends of an individual with BPD experience high levels of psychological symptoms, including anxiety and depression, objective and subjective burdens, and grief. [19] Moreover, family members experience negative feelings, despair, sadness and regret, humiliation, guilt, and shame towards their relatives diagnosed with BPD. [8] Family roles and relationships become strained due to the emotional challenges of having a sibling with BPD. Giffin [7] agrees and states that people appear less tolerant of their sibling's self-harming behaviour and are quick to express their expectations that they should take responsibility for their lives and behaviours. Uys and Middleton [20] further concur with the findings of this study that patients and their families still receive very little information about mental illness. Often, they are not even told what the diagnosis is and sometimes vague terms like 'breakdown' are used. Also, in a focus group run by Dunne and Rogers, [21] family members reported they had to research the diagnosis for themselves using books and websites. They expressed a wish to be informed about how to effectively manage situations that arose with their loved ones.
The findings of the study on which this article is based indicated that families struggle in their own daily lives and in dealing with their relatives with BPD. These experiences signal the need for mental health communities to become more knowledgeable about BPD and its treatments, the establishment of support groups for family members, and ways to communicate this information to those who need it, as proposed by Buteau et al. [10] This suggests mental health nurses would benefit from understanding individuals' experiences of having a sibling diagnosed with BPD. The mental health nurses could then develop material to educate and train families on how to manage their interactions with their siblings living with BPD more effectively.
A lack of communication and education was emphasised by participants in this study. Interpersonal relationships suffer due to a lack of constructive communication and education on the disorder, resulting in family members understandably being tormented by the threat or perpetration of aggressive acts, as noted by Gunderson. [22] Participants' reactions varied from wanting to protect their sibling, to anger at the perceived attention-demanding aspects of their behaviour. Mental health nurses should therefore assist family members by providing support and referral information for mental health education. [22] Gunderson [22] further states family members should not assume the primary burden to ensure patients' safety. Instead, family members should contact professionals for help if there is a perceived threat of harm, or the patient has already engaged in self-harming behaviour. According to Choi, [23] when families contribute to a collaborative treatment plan and are empowered to participate in the therapy or treatment process, all participants in the family system potentially contributing to the problem may be assisted and effectively challenged.
Bailey and Grenyer [24] emphasise that the family environment has an important implication for the clinical outcome of patients with a mental illness. It has also been found that where parents focused their energy on actively caring for a child with BPD, their relationships with their other children became more distant. [7] Therefore, it is important when mental health nurses engage with families of people with BPD to emphasise the dynamics within the family and remain aware of how it impacts the whole family.
As stated, according to Uys and Middleton, [20] patients and their families receive very little information on the affected individual's diagnosis and treatment plan. This view is supported by Giffin, [7] who claims family members experienced meetings with health professionals and the treatment team as being for the benefit of the clinicians, and often just fact-finding sessions for health professionals. In light of the challenges family members experience, Fossati and Somma [4] highlight that relatives of individuals with BPD should have the opportunity to receive state-of-the-art, evidence-based information on BPD and its available treatments to destigmatise the diagnosis and support the family's role in BPD development.
Therefore, adequate family interventions in BPD treatment programmes should be accessible and inexpensive.
Moreover, family members of siblings with BPD are likely to be involved in stormy, roller-coaster relationships, and as a result, may feel overwhelmed by the extreme, unpredictable feelings and situations, even when they do not suffer from any mental disorder themselves. [4] As illustrated in this study, family members may blame themselves for their relative's illness or for not being able to do more to help. This can result in emotional consequences, including anxiety, guilt, anger, frustration, despair, and hopelessness. [25] Ultimately, ineffective coping skills are attributed to a lack of knowledge among family members, preventing them from making appropriate choices in assisting their relatives diagnosed with BPD. [8] Similar findings were reported in this study, and the inadequate coping mechanisms mentioned by the participants were related to their lack of knowledge of the disorder.
Based on this discussion, family intervention programmes are likely to create awareness of the different problems family members who have siblings with BPD encounter. Lawn and McMahon [11] determined that family carers of people diagnosed with BPD experience significant exclusion and discrimination when interacting with mental health services. Therefore, education for all health professionals is indicated, especially those who are likely to encounter BPD carers, to improve their skills and attitudes in working with people diagnosed with BPD.
---
Limitations And Future Research
Although data saturation occurred in the analysis of qualitative interviews, the study's small sample size may be a limitation, as other views may not have been represented. Further research focusing on the provision of collaborative care for people with BPD and their families could be conducted.
---
Conclusion
BPD affects the person diagnosed with it and everyone around them. Individuals face significant challenges when their sibling is diagnosed with BPD. They often have difficulty coping with their sibling's demands while trying to live their own lives, and they experience a range of emotions in their quest to gain control over the situation at hand. Often, interpersonal relationships suffer due to a lack of knowledge and education about the disorder. Family members yearn for constructive communication and support to help them balance their lives and cope with the demands of having a sibling with BPD. Recommendations are proposed for mental health nurses who spend time with patients and families of patients with BPD:
Mental health nurses could play an advocating role in the multidisciplinary team caring for the individual with BPD. This advocacy role would ensure healthcare professionals communicate and provide education to families about the diagnosis and management of BPD.
Mental health nurses could establish support groups for families and patients with BPD.
---
Mental health nurses could establish family intervention programmes, such as family therapy, which would focus on conflict resolution. Couples' therapy for the parents of individuals with BPD could be conducted to ensure they are able to manage the challenges brought into the family when one child has BPD. These recommendations provide a basis for mental health nurses to promote family members' mental health.
---
Declarations
Ethics approval and consent to participate

This study received ethics approval from the University of Johannesburg Research Ethics Committee (REC-01-135-2016) and the University of the Witwatersrand (M181130) prior to the start of the study. All participants were informed of the aims and risks of the study and provided informed consent to participate.
---
Consent for publication
Not applicable.
---
Availability of data and materials
Data from the current study will not be made available, as participants did not consent for their transcripts to be publicly released. Extracts of participant responses have been made available within the manuscript.
---
Competing interests
The authors have no competing interests to declare.
Authors' contributions

NN - Supervision, Writing, Review & Editing

WC - Participant recruitment, data collection, data analysis, and writing - original draft.
---
MP -Supervision
---
CPHM -Supervision
All authors read and approved the final version of the manuscript.
Background: This study reviews the attitudes and behaviours in rural Nepalese society towards women with disabilities, their pregnancy, childbirth and motherhood. Society often perceives people with disabilities as different from the norm, and women with disabilities are frequently considered to be doubly discriminated against. Studies show that negative perceptions held in many societies undervalue women with disabilities and that there is discomfort with questions of their control over pregnancy, childbirth and motherhood, thus limiting their sexual and reproductive rights. Public attitudes towards women with disabilities have a significant impact on their life experiences, opportunities and help-seeking behaviours. Numerous studies in the global literature concentrate on attitudes towards persons with disabilities, however there have been few studies in Nepal and fewer still specifically on women. Methods: A qualitative approach, with six focus group discussions among Dalit and non-Dalit women without disabilities and female community health volunteers on their views and understandings about sexual and reproductive health among women with disabilities, and 17 face-to-face semi-structured interviews with women with physical and sensory disabilities who have had the experience of pregnancy and childbirth was conducted in Rupandehi district in 2015. Interviews were audio-recorded, transcribed, and translated into English before being analysed thematically. Results: The study found negative societal attitudes with misconceptions about disability based on negative stereotyping and a prejudiced social environment. Issues around the marriage of women with disabilities, their ability to conceive, give birth and safely raise a child were prime concerns identified by the non-disabled study participants. 
Moreover, many participants with and without disabilities reported anxieties and fears that a disabled woman's impairment, no matter what type of impairment, would be transmitted to her baby. Participants, both disabled and non-disabled, reported that pregnancy and childbirth of women with disabilities were often viewed as an additional burden for the family and society. Insufficient public knowledge about disability, leading to inaccurate blanket assumptions, resulted in discrimination, rejection, exclusion and violence against women with disabilities inside and outside their homes. Stigma, stereotyping and prejudice among non-disabled people resulted in exclusion, discrimination and rejection of women with disabilities. Myths, folklore and misconceptions in culture, tradition and religion about disability were found to be deeply rooted and often cited as the basis for individual beliefs and attitudes. Conclusion: Women with disabilities face significant challenges from family and society in every sphere of their reproductive lives including pregnancy, childbirth and motherhood. There is a need for social policy to raise public awareness and for improved advocacy to mitigate misconceptions about disability and promote disabled women's sexual and reproductive rights.
The UN Convention on the Rights of Persons with Disabilities (UNCRPD) Article 1 defines disability as the result of long-term physical, mental, intellectual or sensory impairment, which in interaction with various barriers, restricts the individual's ability to participate in society on an equal basis with others. Disability is not the impairment itself but rather the product of attitudinal and environmental barriers [1]. WHO estimates that 15% of the global population has a disability. A higher prevalence of disability is reported among women in poor families in low-income countries [1]. The UNCRPD guarantees the sexual and reproductive rights of people with disabilities including their right to marry and have a family [2]. However, women with disabilities are too often prevented from enjoying these rights in many countries, including Nepal [3][4][5].
The literature suggests that society continues to undervalue women with disabilities, restricting their fundamental rights, including their sexual and reproductive rights and contributing to exclusionary practices by national governments, policymakers, and civil society. While women with disabilities have the same desire and legitimate right to become mothers as all other women, their childbearing and parenting ability is often brought into question [6,7].
Nepal ratified the UNCRPD in 2010 [8]. In addition, there is a range of national laws and policies addressing the needs and rights of persons with disabilities [9]; however, many people with disabilities still experience discrimination, denial of their rights and unequal access to basic services [5,10].
Compounding this, patriarchal societies, such as that found in Nepal, have a strong gender bias favouring men. It is much harder for women with disabilities than their disabled male counterparts to engage in activities such as education, marriage, employment and political participation [11,12].
Marriage is an expected cultural practice in Nepalese society; however, studies reveal that it is challenging for women with disabilities to find a marriage partner due to societal misconceptions and assumptions that incorrectly see such women as burdens rather than contributors to families and society [6,11,13,14].
---
Stigma, stereotype and prejudice
The terms 'stigma' and associated social responses such as prejudice and discrimination are often used interchangeably in the literature. Goffman [15] identified stigma as a feature that discredits and makes the person experiencing it different from others. This phenomenon is often accompanied by negative stereotyping, rejection, loss of status and discrimination [16]. A number of factors such as lack of knowledge, superstition, belief systems and fear contribute to stigmatization leading to exclusion of people with disabilities.
In Nepal, as in many cultures, disability has a long history of being perceived negatively as a misfortune caused by the curse of God, or associated with sins in a past life [17][18][19]. The negative social attitudes and behaviours towards disability are expressed in a number of ways including the exclusion of persons with disabilities from social roles and activities [20]. Thus, people with disabilities are less likely to have access to education, employment, marriage or to be allowed to participate in political and social events. Feeling uncomfortable with people with disabilities, avoidance and maltreatment are reported as are other forms of the negative attitudes and behaviours [20][21][22].
Evidence also shows that attitudes towards disability may change over time and differ from person to person and culture to culture [20,21]. Attitudes also differ by type of disability, with those with more visible disabilities often facing greater discrimination and exclusion [14,20,23].
In this paper, we share findings from a study that focused on public beliefs and attitudes towards disability in rural Nepal with particular reference to the experiences of women with disabilities around sexual and reproductive health, specifically during their pregnancy, childbirth and motherhood.
---
Methods
The study was conducted in Rupandehi, a southern district of Nepal with a population of 880,196 of which 50.89% are female [24]. Out of 125 recorded ethnic and indigenous groups in Nepal, the study district population is comprised of upwards of 95 different groups and indigenous inhabitants including 28 sub-groups of Dalits, who are grouped together as a socially and economically disadvantaged caste group and considered untouchables. The majority of people (78%) live in rural villages, though the urban population is growing fast. In terms of caste breakdown by the 2011 census, the study district population was comprised of 25% Janajati (indigenous), 21% Brahmin and Chhetries and 12% Dalit. 1.12% of the district population are reported to have a disability [24]. The Nepal Human Development Report 2014 reported life expectancy at birth for Nepalese people at 70 years. The national Human Development Indicator (HDI) value is 0.541, while the HDI value for the study district is 0.498 [25].
The data reported in this paper is extracted from a larger, original study that the authors conducted to investigate maternal healthcare access for disabled and Dalit women in Nepal. The larger study followed a mixed-methods approach in which quantitative and qualitative data were collected simultaneously. The study collected quantitative data using a survey questionnaire, while qualitative in-depth interviews and focus group discussions were conducted to understand the experience of disabled study participants, non-disabled women with a range of social and educational backgrounds from the same communities, and the views towards disability of women who serve as Community Health Volunteers. This paper reports the findings from a sub-sample of the qualitative component of this larger study.
Women with disabilities, Dalit and non-Dalit women without disabilities and Female Community Health Volunteers participated in this study. Face-to-face semi-structured interviews with 17 women with physical and sensory disabilities and six focus group discussions with women without disabilities in the study district were conducted to ascertain community attitudes towards women with disabilities. Four of the six focus groups were comprised of non-disabled women from the surrounding community selected to represent a range of different ethnic backgrounds and educational levels. The total number of this group was 42, with groups ranging in size from 10 to 12. An additional two groups of Female Community Health Volunteers, comprising 6 and 8 participants respectively, were chosen with the help of local health facilities.
All participants were purposively selected, and the interviews and discussions were conducted in a natural setting. With the help of local non-government organizations (NGOs) and disabled people's organizations (DPOs) we sampled 19 women with physical, visual, intellectual, speech and hearing disabilities who had experienced pregnancy and childbirth. Two women, one with an intellectual disability and one with a hearing and speech disability, were excluded from the interview because of the complexities involved in communication and in assessing mental disability due to the limited knowledge of the study team. An adapted screening tool from the UN Washington Group on Disability Statistics (short set) was used for disability assessment [26,27]. Interviews with women with disabilities were conducted individually in their homes.
The focus group discussions were conducted in four different villages with diverse groups to capture the views from multiple perspectives. To reflect the key social divisions within the area, both Dalit and non-Dalit women (two groups each) were included in the discussions. Dalit are considered 'untouchables' and are at the bottom of the caste hierarchy, constituting about 12% of the district population [28]. Two additional focus group discussions were conducted with the Female Community Health Volunteers to understand their service experience and views towards pregnancy and childbirth in women with disabilities. This was important as they play a key role in delivering basic maternal-child healthcare and serve as the first contact at the community level. The number of interviews and focus group discussions was determined by data saturation.
Interview checklists and topic guides were used in conducting in-depth interviews and focus group discussions. The checklists and topic guide for focus group discussions covered participant's beliefs and values concerning disability; views on sexual and reproductive needs and marriage of women with disabilities (with particular focus on pregnancy and childbirth among women with disabilities); and their feelings and levels of comfort around women with disabilities. The interview guide for women with disabilities included questions on their own views and experiences in the family, society and workplace in regards to their disability, marriage, pregnancy and childbirth. The checklist and topic guides were field-tested and the first author, a native Nepali speaker, with the help of two local trained female research assistants, conducted the discussions and interviews.
The role of the research assistant was to obtain consent of participants and to take notes during interviews and discussions. Developing a sustained contact, we fostered a relationship with study participants and encouraged their contribution. Considerable effort was put into maintaining neutrality and balancing the power relationship between the researcher and the participants at all stages of the research process. All interviews and discussions were audio-recorded with the participant's written approval.
After completion of field data collection, we followed a series of steps before the analysis proceeded to the interpretive phase. The first step involved transcribing verbatim all the audio-recordings in Nepali and translation into English, which was done by the first author and three other language specialists. Then the first author reviewed all transcripts and the interview notes, reading, rereading and reviewing for overall understanding. Following the framework method developed by Ritchie and Spencer, we then analysed data in five stages: familiarization; identifying a thematic framework; indexing; charting/mapping; and interpretation [29]. To ensure accuracy for inter-rater reliability, a second person, the senior Project Coordinator of the larger study, assisted in conducting the interviews and crosschecked the transcriptions, translations and data coding. At this stage, where no new concepts emerged from the further review and coding of data, we developed sub-themes and grouped together the concepts identified in the text based on their similarities and relationships to develop themes and subthemes (Table 1). The themes and subthemes were then analyzed in relation to the research questions and are described in the following section.
---
Results
---
Characteristics of study participants
The sampled study participants consisted of 12 women with physical disability, four with visual disability and one with a hearing and speech disability. The majority (12) of the participants were non-Dalit while five were Dalits. The ages of these women ranged between 23 and 35 years. All women were married and had personal experience of pregnancy and childbirth. Less than half (seven) reported that they found partners themselves. Over one-third of women had no formal education while four women had some college education.
---
Misconception and misunderstanding about disability
Participants who had disabilities reported that their disabilities were regularly regarded by others as a misfortune, and they frequently encountered inappropriate behaviour from neighbours and society. Women with disabilities reported being regularly humiliated, stigmatized and negatively stereotyped.
A woman with a physical disability expressed her frustration about how the community treats her due to their misconceptions about disability:
If somebody is going out and meets a person with disabilities, they say it is bad luck, I saw the face of a disabled….. We are blamed if they are unsuccessful in work; this is the kind of discrimination we are facing. If we participate in any ceremonies and weddings, they say, 'Why did she come here? Everybody will see her and some bad things may happen.'
-A Dalit woman with physical disabilities
---
Another woman with a visual disability stated:
There was an incident during my first baby. It was during "Teej festival" (Festival of Women) when I had gone to a fair. My baby was three and half months old. A woman there said that it was pathetic to see a blind person having children. I did not recognize the woman, but I got very angry. Why did I have to be a character of sympathy when everything was normal? Had the baby been in pain or had it been crying, such comment would be meaningful. I returned home without going around.
-A non-Dalit woman with visual disabilities

Participants from disabled and non-disabled focus groups reported that folk beliefs about the sexual desires and reproductive capability of women with disabilities persist and that their sexual well-being is often neglected. In Nepali culture, women do not openly talk about sex and sexuality; however, as the non-disabled focus groups were 'female-only', discussion about these topics was more open. The participants in focus group discussions, none of whom had disabilities, stated that due to cultural and social mores their families and neighbours regularly spoke negatively about sexual desire and the ability to conceive for women with disabilities. Only one non-disabled focus group participant raised the issue of rights and argued that people with disabilities have the right to have children. Many focus group participants agreed that people with disabilities have the same desires as people without disabilities. However, not everyone agreed. As one educated participant with a physical disability recalled in her in-depth interview, her own grandmother-in-law was suspicious about her ability to conceive:
We had a grandmother here but it's been about 2 years since she died; she used to keep on asking if I would have the baby so I guess she might have had that feeling. After 6 months of her dying, I became pregnant.
-A non-Dalit woman with physical disabilities
When asked about adult relationships and intimacy, almost all the focus group discussion participants without disabilities stated that women with disabilities can have relationships, become pregnant and give birth, but that they would not be capable (had no ability) of caring for and rearing a baby. Some of the women with disabilities reported that their parents did not understand their emotional and sexual needs and never talked to them about marriage. Many of the focus group discussion participants believed that emotions and desires about sexuality and pregnancy are the same for women with disabilities as for women without disabilities. As two of the focus group discussants noted:
I think that the desire for sexuality is the same for people with disabilities and people without disabilities but there are differences in problems and difficulties.
-FGD/non-Dalit Women
…..of course they want to have a baby. Every woman wants to have a baby. People think that after having a baby, it will grow up and support. He will earn and feed the family later.
-FGD/Dalit Women
Other focus group participants reported that people in the community have both positive and negative views towards pregnancy, childbirth and motherhood for women with disabilities:
All people will not have the similar thoughts; some views in a negative sense and disgust; some say that she needs help for herself and how she rears the baby and some others show their sympathy.
-FGD/Non-Dalit women
In a focus group discussion of female community health volunteers, one woman added an additional cultural interpretation. The meaning of giving birth, she said, for a mother is to be satisfied with all senses. If a mother cannot see the baby, hear the baby cry or play with them, then what would be the point of having a baby:
If they are blind, then it will be difficult. If they give birth, there will be a problem. Who will take care of the child? If they cannot hear the baby's cry, then what is the meaning of giving birth? It will be really difficult…
-FGD/Female Community Health Volunteers
One participant with visual disabilities expressed her disappointment that even after demonstrating her ability to do all household chores, some family and neighbours doubted her ability to care for a baby:
I did hear such comments and doubt on how I would take care of a baby when I myself could not see. But they had seen me doing all the household chores. So people had mixed opinion; some said I would take proper care whereas the others said I would not.
-A non-Dalit woman with visual disabilities
Another widely held belief is that a mother's disability will usually be passed on to her baby. This was found to be a primary reason for negative attitudes among people without disabilities towards marriage and pregnancy in women with disabilities. Women with disabilities were often counselled not to marry or were not considered acceptable marriage partners because of this misconception. Some FGD participants firmly believed that the baby and subsequent generations would inherit any disability present in the mother; others disagreed. Few participants, however, demonstrated any knowledge of the fact that some types of disabilities are congenital and many others are not. As one participant noted:
They should not give birth. The baby might also have a disability due to the disability of mother, so it is risky.
-FGD/Dalit Women
One participant with visual disabilities expressed her frustration that this belief often discouraged her from becoming pregnant:
People say disability is often hereditary. Since both of us were blind, everyone thought our life would be complicated with a baby. Some of the neighbours said we should not have planned for a baby and most suggested it would have been better if we had used family planning devices. I used to say to my neighbours that not all disability is hereditary; some could be and some not; whatever happens we will see….
-A non-Dalit woman with visual disabilities
The study participants reported that disability is the concern of the whole family, with society stigmatizing non-disabled family members as well, and that this often complicates their own marriages and relationships. As one participant with visual disabilities stated:
When there is a person with disability at home, everything gets connected to him/her. For example, I am a blind person in my home, so when my elder brother was getting married, the issue of looking after me was raised by many. Also, people tend to think the baby to be born in the house will also be blind, people think it is heredity…. People often looked at the eyes of my brother's children; so it is obvious that they would talk about our baby.
-A non-Dalit woman with visual disabilities
Societal and cultural beliefs exert a strong influence upon individuals, creating doubts and fears even in those who are educated. For example, a well-educated participant with physical disabilities who did not initially believe her disability would be inherited later developed doubts after talking to her neighbours:
I had a fear that my baby would have the same disabilities as me when I heard things from the society. Because of the belief that we have in society, I had doubts in my mind.
-A non-Dalit woman with physical disabilities
Some Dalit women without disabilities in the focus group discussion argued that not all babies born to parents with disabilities acquire a disability:
…. they may have normal children. There are examples that the deaf have very clever children. Both the mother and father are deaf but their children are talent. In some cases, there could be heredity.
-FGD/non-Dalit Women
Negative attitudes were also expressed in relation to identity. Many of the women with disabilities reported that on many occasions as a child, they were not given a name at all, but were just referred to by their disability (i.e. the blind girl, the lame one). In the eyes of others, their identity was their disability. Many reported that they found this humiliating and an assault on their individual identity.
---
Neglected or ignored sexual and reproductive needs and the rights of women with disabilities
Marriage between people with and without disabilities was often not easy. The study participants, both non-disabled and disabled, reported that marriage of a woman with disabilities is a complex issue. Factors include benevolent protection from parents who fear that another family would not treat their daughter properly; fear from the paternal family that the woman with disabilities would not be "good enough" for their son and would prompt malicious gossip; and fears about conception, childcare and domestic responsibilities. Some FGD participants expressed the view that people with disabilities should be paired off with other people with disabilities.
Interestingly, the majority of the women with disabilities interviewed were married to male partners with disabilities. In addition, most of them had chosen their partner, as opposed to having an arranged marriage. This was in stark contrast to the social practices in the study area, where arranged marriages remain the norm. These participants reported that their families had not considered arranging a marriage for them; therefore, they had sought a partner of their own and lived separately from the extended family.
A smaller number of the participants with disabilities reported that their family members were positive and helpful about their marriage and pregnancy. A woman with visual disabilities reported that her mother-in-law and other family members regularly reassured her, saying that her husband with visual disabilities would be able to create a happy life for them:
Even my mother-in-law used to say that my husband would keep me happy no matter what, so she often told me not to worry. Even my great-mother-in-law was supportive and so were other family members.
-A non-Dalit woman with visual disabilities
A few of the study participants with disabilities were women who had married a man without disabilities. They reported that their partners had married them expecting to acquire their parents' property as part of the dowry, which was an incentive for the marriage. However, these arrangements had often not succeeded, with disputes arising between the families over the terms of the inheritance, and subsequent breakdown of the relationship in many cases. One of the participants with disabilities, whose parents had bequeathed their property to her and who had married a man without disabilities, described her experience:
My husband had been asking for this property to convert to his name but I didn't agree. Then he started torturing me. I could not live together with him and I was separated. It has been around 2 to 3 years now since we separated.
-A non-Dalit woman with physical disabilities
The study found that many families and neighbours perceived pregnancy and childbirth in a woman with disabilities as an additional burden:
It would be difficult if a woman with mobility problem (disabilities) gives birth. In such cases, it is better not to give birth. If the woman cannot take care of the baby, it would be difficult to those for giving birth as well. They will also have difficulty to care and rear the child. If she is blind or only the mobility disabled, she should give birth even for her own future support. It would be better to give birth as per the individual's physical ability.
-FGD/Non-Dalit women
It would be as per the situation. Some love them and care more. But if they have given birth even with their severe type of disability, then the family or neighbours may perhaps look negatively and may feel disgust.
-FGD/Non-Dalit women
It was found that women with disabilities faced enormous pressure from society's negative attitudes about their pregnancy and childbirth. On many occasions, the women with disabilities interviewed for this project stated that they themselves felt guilty and a burden, and faced discouragement in all aspects of life. Many respondents with disabilities reported that their family, particularly their mother-in-law, was not helpful during their pregnancies. However, mothers reported that after the baby was born, most mothers-in-law welcomed their new grandchild:
Relatives and society view us as a burden to them and they think they have to look after us throughout their life. This opinion is prevalent in every person of the society. They think a blind person is incapable of doing every kind of thing. Maybe some people with visual disabilities do not get married because they do not want to. Nevertheless, people think they did not get married because of their blindness, nobody understands that even blind people have choices in life. Such things make us feel really bad.
-A non-Dalit woman with visual disabilities
Another Dalit participant with physical disabilities stated that she often came across negative reactions from her neighbours. She would not be invited to neighbours' functions, as they considered her disability a burden, saying:
…..why invite people with disabilities to the ceremony, instead of getting help from them. We have to care for them…they cannot do anything….they come, sit and only talk ……they are not helpful….
-A Dalit woman with physical disabilities
The same participant recalled that she faced more trouble from her family than from the neighbours during her pregnancy and childbirth. She reported that her mother-in-law was negative and totally unhelpful when she was pregnant, so much so that her husband brought her back to her own parents' home for the delivery:
Other family members said, 'We should feed her and take care of her child too, let her stay there.' My mother-in-law said, 'If I had given birth to you, I would care for you', so I stayed 5-6 months with my mother. Nobody came from my husband's family to bring me back from my maternal home. When my baby started to crawl, my husband came to bring me back, without the permission of his mother. My mother said, 'I will not send my daughter if you cannot take care of her. I will care for her whatever I can.'
She further described how her sister and mother were supportive, cared for and counselled her, keeping her with them during her pregnancy and childbirth, while she was being badly treated by her mother-in-law:
My mother…she tried to convince me that many people (who have disability) do not get married, but you are lucky so you got married…. who could have known that your new family members would not care for you after marriage……. Sometimes I thought to commit suicide by taking poison even after conceiving.
-A Dalit woman with physical disabilities
In Nepal, mothers-in-law have a powerful influence over their sons' attitudes. As the woman above continued:
I felt bad…I had given birth to a child that had added more trouble…I was tolerating the rudeness and bad behaviour while I was alone….but after having the baby, I had the additional responsibility to care for the baby. Nobody would marry me as well….I had pain and became restless by thinking all this. Somebody had talked to my husband so he came to take me back with him.
-A Dalit woman with physical disabilities
Some participants said that, for people with disabilities, having a child was part of a strategy to ensure support for themselves in the future as parents.
---
Rejection and exclusion by the family and society
As noted above, the study found that families of women with disabilities in the study population commonly denied the rights of women with disabilities to marry or have children in the first place. The reasons included family prestige, over-protection by the parents, lack of understanding about disability and the reproductive needs of people with disabilities, and misconceptions created by stereotyping and prejudice, including around the fear of inheritance of the disability, as noted earlier.
A Dalit woman with physical disabilities stated that her husband was blamed for marrying her and excluded for several years by his parents and relatives:
They did not talk to me and my husband for a year. They had scolded so much saying he should have searched (for) a non-disabled woman, why did he marry me….. they said I cannot plant paddy, cannot do other works, why he married with such a woman? They did not speak for a year with him too. Later they said to him that 'It was your fate, you did not follow what I said but married such (a person)'. But earlier they used to scorn us saying he would not have a child by marrying a woman with disabilities.
-A Dalit woman with physical disabilities
Another participant with visual disabilities had a similar story. She chose her partner with visual disabilities herself and their marriage was initially rejected by the husband's family, until it became clear that their child had not inherited their blindness:
With the first child, the problem was that we had not been accepted by our home/ family as we got married ourselves. Moreover, people thought that our babies would also be blind. Only when they realized that the baby could see, then only was I taken home along with the baby. They bought a separate home in Bhairahawa and kept us there. Now it is different, we have very good relation with other family members. Earlier it was very difficult.
-A non-Dalit woman with visual disabilities
Women with disabilities were asked about their involvement in major family decisions and attendance at neighbours' functions to understand their inclusion within as well as beyond the family. Few participants reported involvement in their family's decision-making. The majority of respondents with disabilities also reported that they were not involved in women's groups. Some who had been part of women's groups reported that they felt discriminated against, disdained or considered inferior, prompting many to leave such groups. One participant reported that a group specifically doubted her ability to make monthly savings contributions and did not invite her to become a member:
What should I say why they don't call when the neighbours go there. That is why I don't feel like going there and I will not go there…They might have the thought 'How will I get money to be in the group'.
-A non-Dalit woman with physical disabilities
It was also apparent that many communities excluded women with disabilities from participating in ceremonies and rituals, considering their presence bad luck. One of the participants reported:
Some people say it is unfortunate if they see us; some do not like us to be present in ceremonies and rituals considering us as a symbol of bad luck. If I go somewhere and anyone comments negatively, I do not go again. I have heard somebody saying, 'She came herself in spite of sending other family members.'
-A Dalit woman with physical disabilities
---
Facing challenges due to powerlessness
Some of the FGD participants without disabilities and many of the participants with disabilities in their in-depth interviews reported that women with disabilities are discriminated against in every sphere of life. Some participants with disabilities reported that Female Community Health Volunteers do not visit them, while women without disabilities are visited and counselled during their pregnancy. A few participants reported that whilst initially invited to attend women's group meetings, they subsequently felt ignored and their opinions disrespected, prompting them to leave the group.
Importantly, women with disabilities further stated that discrimination occurs not only outside but also within their homes. One participant with a disability described the discrimination she faced from her own family members during her pregnancy and childbirth:
There was so much….I am afraid to talk with anyone about those times, and the discrimination and troubles that I faced. I have to reassure myself and I like to take satisfaction because of my children. Both of us, me and my sister-in-law, delivered at home. Nobody helped me but the entire family cared throughout the 24 h while my sister-in-law gave birth. I was at my maternal home when I gave birth to my son and had good food, but with my daughter, they gave me cheap food.
-A Dalit woman with physical disabilities
Another Dalit woman with physical disabilities reported that she was discriminated against at work by her neighbours due to her disability. She stated that her mother had frequently abused and discriminated against her before her marriage, and continues to do so as she lives close by:
There are two younger sisters, they love me but mother hates me. They are far away, so mother loves them. I am disabled and she does not love me! My leg became weak and my mother used to verbally abuse me; she said that it would be better if I had died.
-A Dalit woman with physical disabilities
An FGD participant described discrimination and exploitation within her own family towards a niece with hearing disabilities:
I have a niece who cannot speak well, she got married but people at her home didn't care for her. They thought deaf people should be given leftover food, as she cannot speak for herself. Such is the perspective of people.
-FGD/Female Community Health Volunteers
The study found that people in society think that women with disabilities are weak and powerless. Such an environment creates feelings of helplessness and fear in the minds of women with disabilities. The participants reported many examples of violence, abuse and exploitation by family members. As one of the study participants noted:
Sometimes I had such feeling. I felt as weak, not able to do anything. Even when people said something good, I felt they were saying it to humiliate.
-A non-Dalit woman with visual disabilities
Both Dalit and non-Dalit women with disabilities reported facing challenges in the family and society due to their disability. However, Dalit women with disabilities stated that their experience of disparities, exclusion and bad treatment in society was due more to their disability than to their lower caste status. A Dalit participant with disabilities expressed her dissatisfaction at being stigmatized and mistreated:
Being disabled is more painful….If I did not have a disability nobody would speak bad or painful words to me…I would not seek support or help from anybody….society would not consider me a symbol of bad luck and I would not be excluded.
-A Dalit woman with physical disabilities
Some of the respondents reported that their husband or other family members abused them. One participant reported that her mother frequently abused her verbally and physically due to her disability:
…….helped by my brother-in-law. He has known all about me and my trouble, how I was suffering being scolded and beaten. I could do work and was also doing, but she (mother) used to beat me saying that I was sitting idly and eating, doing nothing.
-A Dalit woman with physical disabilities
---
Emotional support
Not all respondents reported negative attitudes towards women with disabilities. Despite the negative social environment, a number of participants, in both focus group discussions and individual interviews, reported that their families and neighbours were supportive of and positive towards disabled people. Some disabled women specifically reported that their neighbours were kind, sympathetic and supportive during their pregnancy and encouraged them to seek services. Some also reported that Female Community Health Volunteers visited them at home during their pregnancy.
As one of the FGD participants stated:
All people will not have the similar thoughts; some views in a negative sense and disgust; some say that she needs help for herself and how she rears the baby and some others show their sympathy.
-FGD/Non-Dalit women
---
Discussion
Findings from this study provide a range of insights both from women with disabilities themselves and from members of the families and communities in which they live. It is notable that culture and social attitudes towards women with disabilities were often reported as unfavourable, with misconceptions about disability in general indicating that negative social attitudes towards disability prevailed in the study district. Findings revealed that many women with disabilities are stigmatized and discriminated against in various forms by society and even within their own families. Importantly, however, while exclusion and negative attitudes were commonly reported by and about women with disabilities, the findings were mixed, with some women with disabilities as well as some people without disabilities expressing attitudes that are more inclusive.
In relation to negative attitudes and social behaviours towards women with disabilities, several key issues were identified; despite many people's openly prejudiced views, some degree of "benevolent prejudice" towards pregnant women with disabilities was also common. Issues regularly raised in FGDs and interviews included the marriage of women with disabilities and their ability to conceive, give birth and safely raise a baby. Moreover, many respondents with and without disabilities reported anxieties and fears that a mother's impairment would be transmitted to her baby and that pregnancy and childbirth for women with disabilities would be an additional burden for their family.
The study found little exposure to, and insufficient knowledge about, disability among participants without disabilities, leading to blanket assertions that resulted in discrimination, rejection and exclusion of people with disabilities. Many women with disabilities reported that they faced discrimination and humiliation as well as violence from their family members, particularly from their mothers-in-law and husbands. More broadly, the study reflected already established findings that women with disabilities live under various forms of oppression, including being denied opportunities and facing rejection, showing that women with disabilities are often not valued in Nepalese society and sometimes have no individual identity beyond that of their disability.
As in other societies around the world, myths, folklore and misconceptions about disability, such as the belief that 'disabled people are tragic figures that society should pity' [30-33], were commonly found among the individuals without disabilities and the community health worker groups interviewed. Consistent with this finding, the literature also shows that negative attitudes are more common among poor and poorly educated communities [33-36].
Beliefs about disability expressed by the non-disabled participants in this study are also commonly found in the religious and folk beliefs in many traditions including Hinduism, Buddhism and Islam. For example, in India and Nepal, many people believe that disability is a punishment or curse from God. Moreover, people with disabilities are traditionally perceived as inauspicious and are often discouraged from attending religious and wedding functions [19,33,34]. While Hinduism has as a central tenet the concept of equality, the strong belief in reincarnation is sometimes interpreted to mean that people disabled in this life may have done something wrong in a previous life [19,33,37].
Exclusion was often expressed through patronising attitudes. These were often manifested through people in the community questioning the ability of women with disabilities to exercise their right to make key life decisions around marriage, pregnancy and childbirth. A number of factors, such as inadequate knowledge about disability and the needs of people with disabilities, misconceptions and incorrect beliefs, as well as fear of contagion, fear of the inheritance of disability, and uncertainty about how to interact with people with disabilities, contributed to these negative attitudes. The focus of this particular paper is the question of pregnancy, childbirth and motherhood among women with disabilities who already have one or more children. A linked but important additional question, addressed elsewhere [5], is the access of women with disabilities to contraception and the availability of this access compared to that of their non-disabled peers.
Positive perceptions about the ability of women with disabilities to give birth and rear their children were minority views; however, they did exist, and there was strong variation in these perceptions by disability type. For example, women with intellectual or mental disabilities were often presumed to pose a greater risk to the child than women with other types of disability. Families routinely, although not universally, perceived a woman with a disability as a burden, assuming she would contribute less to family chores and income. Such negative attitudes led to discrimination within families, with little or no priority given to the needs of women with disabilities, including their treatment, rehabilitation or other essential care.
Issues related to the ability of women with disabilities to marry, and doubts about their ability to give birth and rear children, are consistently highlighted by studies conducted in countries such as India and Korea [33,34,38-40]. However, not all research is consistently negative on this point. In contrast to our findings, another Nepali study by Simkhada et al. [19] found positive attitudes towards the rights of women with disabilities to marry and have children. Such contradiction in people's views is not surprising in a multi-cultural society like Nepal. Moreover, this study looked at different groups, with lower educational and awareness levels and in a different part of the country, than did Simkhada et al.
Significantly, women with disabilities themselves often shared reservations about their ability to successfully marry, become pregnant and raise children. While some had come to understand and appreciate their own ability and had some knowledge of new and changing attitudes regarding the rights and potential of women with disabilities, a number had not been reached by progressive ideas and attitudes regarding people with disabilities.
Evidence shows that negative attitudes towards disability are changing gradually [21]. This study reflected some of this changing attitude, with respondents reporting some positive attitudes towards people with disabilities, and respondents with disabilities reporting numerous examples of kindness and acceptance. Some of this is also based on individual attitudes and on familiarity with disability from personal or family experience. Whilst in this study disabled participants perceived these as positive experiences, it could be argued that these actions were more closely linked to paternalistic caring than to notions of equality and mutuality. However, increased education and levels of awareness among the public, changing socio-cultural contexts, and policy changes, including Nepal's ratification of the CRPD and the development and passage of a number of related laws and policies in line with the CRPD, might also be influencing changing public views about women with disabilities.
It is important to note that women with disabilities showed not only vulnerability but a number of strengths. For example, many who felt that their families were unwilling or unable to find them marriage partners had identified and arranged their own marriages, often in the face of considerable opposition. This self-starting approach to marriage, which flies in the face of established custom, is worth a more in-depth discussion than can be provided here, but it is of note. Many disabled women reported wanting a child, deciding to become pregnant and seeking antenatal health care as well as support for childbirth, even though they knew or feared that they would meet with resistance and lack of support from some family members, health care providers and members of the surrounding community. There was also an understanding, expressed by some women with disabilities as well as members of the broader community, that in a very practical sense having a child represents long-term planning, as it guarantees the disabled woman, as it does many other women in the community, some security and support in older age.
At the outset of this study, we hypothesized that women who were both disabled and Dalit would be doubly discriminated against, based on studies showing that women with multiple vulnerabilities may face compounded discrimination [41]. Significantly, however, this study found that disability far outweighed caste as a daily concern. Disabled women, both Dalit and non-Dalit, faced similar challenges. Dalit women with disabilities consistently reported that they experienced discrimination due to their disability rather than their lower caste status. Non-Dalit women reported facing barriers in education, social inclusion and family life very similar to those reported by Dalit women. Further studies are needed to explore this intersectional issue in greater depth.
Finally, it is important to note that there was a mix of attitudes throughout the community, based on a range of factors including membership in different ethnic/minority groups, personal familiarity with disability, education, and individual beliefs and temperament. This mix of attitudes in the public arena, even in a remote area, is an interesting finding and one that generates recommendations for policy and practice. There is certainly a need to encourage social policy and information efforts to raise public awareness, and improved education and advocacy campaigns to counter misconceptions about disability and promote the sexual and reproductive rights of women with disabilities. But the range of attitudes and beliefs found also offers an important starting point for such efforts: it may be possible to build on the best and most progressive attitudes towards women with disabilities already existing in the community. Such interventions must not only target the general community: our findings show that many women with disabilities themselves need more information and support as they move forward through pregnancy, childbirth and motherhood.
We acknowledge several limitations associated with this study. The study was a part of a Safe Motherhood Project in Nepal, therefore, the study population was limited to one project district. Additionally, like all qualitative studies, our findings may not be generalizable to other areas with different social and cultural contexts. Furthermore, the views expressed by the participants reflect the attitudes towards disability in general rather than specific types of disability.
---
Conclusion
Although negative attitudes are prevalent among the public in the study district towards women with disabilities, their marriage, pregnancy and motherhood, we found a range of attitudes related to pregnancy, childbirth and motherhood among women in the general public in this area of Nepal. Without doubt, women with disabilities face significant challenges from family and society in every sphere of life due to negative attitudes which reflect inadequate public knowledge and misconceptions about disability, stereotyping and prejudice. Yet there was also a range of positive attitudes expressed by focus group members that warrants further exploration and that could provide a starting point for positive changes in policy and programmes to better support women with disabilities who become pregnant in this region. And finally, it is important to emphasize that disabled women themselves faced a number of significant social and economic challenges, but also showed a range of strengths that must be supported and encouraged.
---
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author upon request.
---
Ethics approval and consent to participate
Ethical permission, for this mixed methods study (both qualitative and quantitative data collection efforts) was obtained from the Nepal Health Research Council (NHRC) -Ref. no. 1184 and UCL ethics committee project ID: 5260/001. Verbal and signed consent was obtained from all participants before interviews and discussions were conducted. In all cases, we explained thoroughly that their participation was entirely voluntary and the information obtained will be used for this research only. Confidentiality was maintained throughout the study by using number identifiers on audio recordings, transcripts and interview notes.
---
Consent for publication
Not applicable.
---
Competing interests
The authors declare that they have no competing interests.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
Introduction: Salivary bioscience has found increased utilization within pediatric research, given the non-invasive nature of self-collecting saliva for measuring biological markers. With this growth in pediatric utility, more understanding is needed of how social-contextual factors, such as socioeconomic factors or status (SES), influence salivary bioscience in large multi-site studies. Socioeconomic factors have been shown to influence non-salivary analyte levels across childhood and adolescent development. However, less is understood about relationships between these socioeconomic factors and salivary collection methodological variables (e.g., time of saliva collection from waking, time of day of saliva collection, physical activity prior to saliva collection, and caffeine intake prior to saliva collection). Variability in salivary methodological variables between participants may impact the levels of analytes measured in a salivary sample, thus serving as a potential mechanism for non-random systematic biases in analytes.
Methods: Our objective is to examine relationships between socioeconomic factors and salivary bioscience methodological variables within the Adolescent Brain Cognitive Development Study© cohort of children aged 9-10 years old (n = 10,567 participants with saliva samples).
Results: We observed significant associations between household socioeconomic factors (poverty status, education) and salivary collection methodological variables (time since waking, time of day of sampling, physical activity, and caffeine intake). Moreover, lower levels of household poverty and education were significantly associated with more sources of potential bias in salivary collection methodological variables (e.g., longer times since waking, collections later in the day, higher odds of caffeine consumption, and lower odds of physical activity). Consistent associations were not observed with neighborhood socioeconomic factors and salivary methodological variables.
Discussion: Previous literature demonstrates associations between collection methodological variables and measurements of salivary analyte levels, particularly with analytes that are more sensitive to circadian rhythms, pH levels, or rigorous physical activity. Our novel findings suggest that unintended distortions in measured salivary analyte values, potentially resulting from non-random systematic biases in salivary methodology, need to be intentionally incorporated into analyses and interpretation of results. This is particularly salient for future studies interested in examining underlying mechanisms of childhood socioeconomic health inequities.
---
Introduction
Socioeconomic factors, or socioeconomic status (SES), are well-established drivers of health inequities (1)(2)(3). However, a thorough understanding of SES-driven health inequities is needed within pediatric populations to elucidate early-life biological antecedents of adult health inequities. Previous studies among pediatric populations demonstrate multiple salivary biomarkers implicated in associations between the broader social environment and physiology, including neuroendocrine markers (e.g., alpha-amylase, cortisol, DHEA), metabolic markers (e.g., insulin, glucose), and immune markers (e.g., c-reactive protein, cytokines) (4-7). However, a number of these biomarkers rely on invasive sampling techniques, particularly blood draws, risking harm to participant-researcher rapport and the overall willingness of communities to participate in biomedical research, particularly among pediatric populations. One approach to addressing this gap in biological measures among pediatric studies is the use of salivary biosciences.
Salivary biospecimen technologies have grown in popularity over the last decade within research studies and clinical testing to non-invasively measure levels of analytes within diverse human populations (8). This utility is primarily due to its contextual practicality, allowing for sample collection outside of laboratory or clinical settings, as well as the non-invasiveness and feasibility of saliva sampling relative to more invasive techniques, such as phlebotomy (9)(10)(11). The many advantages of collecting salivary samples over other types of biospecimens in research include (1) being a low-cost option particularly for studies requiring multiple samples, (2) the ability for a participant to self-sample, and (3) adaptability to various field settings (10,(12)(13)(14). This method offers increased feasibility to measure physiological correlates of SES and related factors given the non-invasive nature, ease of collection of salivary samples, and reduced cost of sampling (10). These cost-saving benefits afford strengthening of study design such as sampling from more participants, increased number of collections within participants, or increased number of biomarkers assayed from each saliva sample. Further, salivary bioscience demonstrates great potential for diagnostic capability including pediatric endocrine dysfunction, cardiometabolic disease (15), monitoring lithium levels for psychiatric disorders (16), and diagnosing COVID-19 at home (17).
Additional methodological strengths of salivary sampling allow for the inclusion of communities that have been traditionally underrepresented in research and eases the burden of participation for families, improving adherence (13,18). Certainly, a history of scientific injustices exists, disproportionately affecting low socioeconomic status and racially/ethnically minoritized communities, and driving historical and current-day underrepresentation in biomedical research that has often resulted in varying degrees of distrust of researchers (19)(20)(21). These historical and current injustices often occur when the cultural appropriateness of biological sample collection is not adequately considered (19,22). Salivary collection is a tool that can minimize cultural insensitivities inherent in the collection of biological data, given its acceptance among diverse adolescent communities (23,24). However, it is important to note that any biological collection can be precarious and warrants culturally and equity guided investigations. Some potential examples include: (a) some cultures or communities may feel averse to producing a saliva sample, particularly when observed by an experimenter, and may prefer other biological methods over saliva; (b) age of study sample matters, with children generally exhibiting aversion to blood sampling but willingness to produce saliva; and (c) certain cultures may perceive discarding unused saliva into waste as disrespectful. It is our recommendation that the community preferences for or against saliva collection be well understood before leveraging salivary biosciences.
Given these advantages, the feasibility, and the promising diagnostic future of salivary biosciences, it is essential to first understand how experimental design and saliva collection methodology should be standardized to ensure precision of measured analyte levels, particularly for the investigation of health inequities and for increased application within pediatric research or clinical practice. Without this deeper methodological understanding, spurious differences in experimental design and methodological implementation of salivary biosciences may undermine the interpretability, accuracy, and utility of salivary analytes.
Several decisions in the experimental design can directly influence the methodology of salivary sample collections. For example, a design that rigorously standardizes collection of salivary samples can reduce or eliminate unintentional biases due to variations in collection methodological variables. These methodological decisions include how much time should be allowed between a participant's waking time and their saliva collection time, the time of day the saliva sample is collected, the amount of physical activity allowed prior to sampling, whether caffeine is consumed prior to sampling, and other oral considerations that can impact measured analyte levels (5,8,12,25,26). Standardized collection practices help eliminate unintended experimental noise, where non-biological factors may influence the composition or volume of whole unstimulated saliva (27). Without stringent standardized collection practices governing how and when saliva samples are collected, leveraging salivary biosciences on a large scale may result in unintended methodological variations, which can impact the analyte levels measured in the collected saliva sample and thus deviate from true biological levels, warranting caution (28).
Many adrenal steroid analytes demonstrate diurnal/circadian or seasonal rhythms, marked by patterns of varying levels over an extended period of time. For example, cortisol, a marker of psychological stress, fluctuates throughout the day, peaking approximately 30-45 min after waking followed by tapering levels in the evening (e.g., 3-12 h after waking) (11,29). In addition, the amount of sunlight at various points of the day drives circadian rhythms (30). Waking later in the day when sunlight is different than morning light may shift circadian phases and thus alter typical patterns of analytes.
Not only is the time since waking important, but the time of day when the sample is collected is also a source of experimental variation. For example, salivary dehydroepiandrosterone (DHEA) and testosterone levels are typically highest in morning samples and drop continuously throughout the day to produce lower levels in evening samples (31-34). In addition, DHEA is implicated in physiological responses to acute stress (35,36). Thus, saliva sampled later in the day may represent different hormonal profiles compared to morning collections, given fluctuating levels with circadian patterns or greater opportunity to experience acute stressors as the day goes on. Given these considerations, minimizing variations in collection practices or pre-collection exposures is important for making accurate conclusions about the source of differences in analyte levels. Variations in methodological factors may become increasingly problematic for obtaining precise measured analyte levels in maturing adolescent populations, especially where pubertal maturation underlies the biological systems producing the analytes of interest.
Further, methodological variables related to lifestyle, such as rigorous physical activity and caffeine intake prior to salivary sample collection, may introduce bias in analyte levels by altering physiological states or the integrity of the saliva sample. Rigorous (>20 min) physical activity can alter levels of DHEA or testosterone (37), particularly in saliva samples taken during early stages of pubertal maturation when hormone levels are very low (38). Salivary DHEA levels among adolescent males have been documented to increase post-exercise, yet with varying slopes according to pubertal development (35). Caffeine intake prior to saliva sampling can impact analyte levels through a few different mechanisms, including shifting the salivary pH and increasing sample acidity, thereby impacting the performance of certain pH-sensitive assays (5,39), or promoting bacterial growth, thereby compromising the integrity of salivary fluid (40). In addition, caffeine intake may risk dehydration, which would reduce salivary flow rate, and/or activate physiological pathways that overlap with the origins of the analyte of interest, such as the adrenergic pathway, increasing urine concentrations of metanephrine (41)(42)(43). Although these observations are in serum or urine samples, unclear evidence on correlations of serum/urine metanephrine with salivary levels as a function of caffeine intake warrants consideration of caffeine exposure in salivary collections.
Standardized collection practices can minimize differences between and within participants in these methodological variables by regulating the time of day when the saliva sample is collected, prohibiting participants from consuming caffeine or performing rigorous exercise beforehand, and standardizing the duration of saliva sampling between and within participants (25). Analytes closely connected to circadian patterns may be particularly sensitive to variability in sampling times, or to alterations in pH levels due to caffeine consumption. The present analysis examined relationships with several salivary methodological collection variables in a large US-based, representative pediatric cohort participating in the Adolescent Brain Cognitive Development Study© (44). In the ABCD Study, detailed data were collected on the methodological variables mentioned above, but these variables were not standardized in the collection protocol, allowing us to evaluate potential non-random methodological variation in saliva collection in relation to key socioeconomic factors.
Socioeconomic factors have been of central focus for understanding health inequities. Socioeconomic factors reflect access to economic or social resources and are often represented by individual or composite measures of household income level, poverty status, parental educational attainment, or occupation (45). These factors have been described in the literature to influence child developmental outcomes. Low SES has been associated with poor school readiness and academic achievement, more frequent adverse experiences, structural brain differences, and altered executive functioning (46-50). Studies using salivary samples to investigate relationships with SES have noted that children from low SES households show higher baseline neuroendocrine profiles and steeper neuroendocrine trajectories over time relative to children from high SES households (51,52).
SES has been purported to operate as a function of resource availability for a study participant (53). If samples are collected at home, participants may have limited access to freezers to store salivary samples, mailing resources to ship collected saliva, or technology (e.g., text messaging or a phone) that would facilitate reminders to collect samples at consistent times and enable accurate collection-time records without the aid of digital tools (25). Limited availability of and access to social and economic resources may also influence salivary sample collection variables when participants self-schedule when to come into the laboratory for sampling. Thus, collections performed at a laboratory or study site raise the question of whether collection methods differ as a function of participant resource availability.
Relationships between SES and other variables important in salivary collection, namely physical activity and caffeine consumption, have been demonstrated. Positive relationships between SES and the amount of physical activity performed among adolescents have been reported, such that low SES tends to be associated with less physical activity compared to high SES (3,54,55). However, variations in the measurement of both SES (e.g., income-to-needs ratio, household income, parental occupation, parental education) and amount of physical activity (e.g., time or duration, frequency, school-based or extracurricular) may contribute to some null findings (55). Despite overall reductions in the amount of caffeine consumption among children and adolescents since 2000, those living at 0-99% and 100-199% of the federal poverty level have consistently consumed caffeine at higher rates compared to those living at greater than 200% of the federal poverty level (56). Particularly among children ages 6-11 years old, rates of caffeine consumption in households with low or very low food security and income-to-poverty ratios below 2.0 are significantly higher compared to households with income-to-poverty ratios above 2.0 (57). Thus, child/adolescent physical activity and caffeine consumption are possible sources of methodological variation in saliva collection when not standardized in the collection design.
Given that many analyte levels fluctuate on a circadian rhythm, patterns of saliva collections earlier or later in the day among one socioeconomic context relative to others in the study sample would suggest potential non-random systematic errors in salivary analyte values due to broader social determinants. Similarly, socioeconomic-related differences in physical activity or caffeine consumption prior to salivary sampling may serve as another mechanism for non-random systematic errors in salivary analyte levels. Without disentangling these contributors, the inclusion of these salivary analyte values in analyses would bias conclusions regarding differences in biological outcomes. Thus, it remains important to capture a greater understanding of socioeconomic influences on salivary bioscience methodology before leveraging salivary data for accurate investigation of health inequities. The present analyses will inform the special considerations that need to be made when leveraging salivary analyte levels from large multi-site studies in childhood, a critical developmental period during which early-life inequities "get under the skin."
Investigations of the relationship between salivary collection methodological variables and socioeconomic factors among child populations are limited. However, with the emergence of salivary technology we are observing widespread utilization of salivary biosciences in large cohort studies. The objective of this study was to examine the association between key socioeconomic factors (e.g., poverty status, household education, neighborhood deprivation) and salivary collection methodological variables.
---
Materials and methods
---
Background on study sample and sample characteristics
This analysis was performed using a sample of children aged 9-10 years at enrollment participating in a 21-site study in the United States from the Adolescent Brain Cognitive Development (ABCD) Study© Release 3.0. This dataset was selected given that it is a large-scale longitudinal (e.g., annually over the course of 10 years) pediatric collection of whole saliva via passive drool for analysis of several hormonal analytes (e.g., estradiol among females only, DHEA and testosterone among males and females). Although there have been three collection timepoints to date in this dataset (e.g., enrollment/baseline, year 1, and year 2), this current analysis focuses on baseline measures collected in 2016-2018 only. Longitudinal change was not the focus of the a priori aims, and any existing methodological variation observed at baseline is most likely repeated and similar in future waves of saliva collection in this cohort.
Participants reported to the study site for salivary sample collection, where one salivary sample was collected via passive drool from each participant at each annual timepoint (58). Participants and their guardian/parent did not receive prior instruction to prepare for the saliva collection during the study visit (e.g., participants were not instructed to abstain from eating, caffeine, or vigorous exercise prior to the study visit). Upon arrival at the study site, a minimum of 30 min passed between participants' arrival and the start of the saliva collection. During this time, participants were instructed not to eat or drink anything other than water (including no mints/gum), then asked to rinse their mouth out with water 10 min prior to providing the saliva sample. If participants were given a lunch break, or arrived immediately after lunch, the protocol allowed for a minimum of 60 min before sampling. Thus, the majority of saliva samples were collected ~60 min after a large meal (38,58). Participants and their guardian/parent arrived at the study site for collection based on when the study site and participant schedules aligned. Current guidelines for optimal utilization of salivary bioscience recommend the notation of time of recent meal, oral health or injuries, braces, or recent loss of deciduous teeth (5). However, many of these variables were not controlled or collected in the ABCD Study given considerations for reducing participant burden and experimentally prioritizing the central aims of the ABCD Study, including multi-modal MRI, comprehensive profiles of adolescent substance use, and mental health assessments.
When present at the study site, a research assistant (RA) documented the arrival time of the participant, presence of parent or guardian, and the time the participant reported waking. After the RA instructed the participant to passively drool into a sample collection tube, the RA then documented the timing of the salivary sample, duration of sample collection, discoloration, or visible imperfections, as well as duration from collection to placement into a -20°C to -80°C freezer. Guardians/parents were compensated for their participation in the ABCD study, with the level of compensation being varied between study sites to account for differences in cost of living (44). Salivary samples were then shipped from study sites on dry ice, confirmed for frozen state upon arrival, and assayed by an external laboratory (59).
To reduce statistical noise within the analytic sample unrelated to sampling methodological variables, we removed participants whose biological sex at birth was not collected (n = 7), who were reported as unable to complete the collection (n = 59), or who refused (n = 19) from the analyses. We further cross-referenced each participant's biological sex at birth with the biological sex reported at the time of salivary sample collection and removed those with mismatched sex (n = 23). We adopted this decision to cross-reference reported sex at birth with biological sex reported at baseline collections because early ABCD protocol indicated that a participant's sex at birth would determine which hormone panel (e.g., being inclusive or exclusive of estradiol) would be analyzed at the study visit. Only 2 participants were marked as male at birth but had missing entries at salivary sample collection; those 2 participants were reclassified as male for analyses. We also reclassified the 4 participants reported as intersex (I) at birth (Figure 1) with the sex reported at salivary sample collection. In addition, participants with a gestational age less than 28 weeks and a reported birthweight less than 1,200 grams were removed from the analytic sample. These participants were erroneously included in the study given that the exclusion criteria required gestational age to be 28 weeks or greater. The final analytic sample consisted of n = 10,567, of which 5,534 were male and 5,033 were female at baseline (Figure 1).
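As a rough illustration, the exclusion logic described above can be expressed as a filter over participant records. The field names below (`sex_at_birth`, `collection_status`, `gest_age_weeks`, `birthweight_g`) are hypothetical placeholders, not actual ABCD variable names:

```python
# Minimal sketch of the baseline exclusion rules described above.
# All field names are illustrative placeholders, not ABCD Study variables.

def eligible(p):
    """Apply the exclusion criteria to one participant record."""
    if p["sex_at_birth"] is None:                      # sex at birth not collected
        return False
    if p["collection_status"] in ("unable", "refused"):
        return False
    # Mismatched sex between the birth report and the collection-visit report;
    # a missing collection-visit entry falls back to the birth report.
    if (p["sex_at_collection"] is not None
            and p["sex_at_birth"] != p["sex_at_collection"]):
        return False
    # Erroneously enrolled: gestational age < 28 weeks and birthweight < 1,200 g.
    if p["gest_age_weeks"] < 28 and p["birthweight_g"] < 1200:
        return False
    return True

participants = [
    {"sex_at_birth": "M", "sex_at_collection": "M", "collection_status": "ok",
     "gest_age_weeks": 39, "birthweight_g": 3300},
    {"sex_at_birth": "F", "sex_at_collection": "M", "collection_status": "ok",
     "gest_age_weeks": 40, "birthweight_g": 3100},   # mismatched sex -> excluded
]
analytic_sample = [p for p in participants if eligible(p)]
```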
---
Measures
---
Demographic and socioeconomic variables
The inclusion of child age (in months) in statistical analyses was informed by evidence of differential sleep habits, caffeine intake, and physical activity habits between children ages 7 to 10 years old. Sleep habits, including sleep duration (which may inform waking time before salivary collection), are significantly associated with child age around ages 9-10 (e.g., sleep duration decreases as child age increases) (60,61). Further, documented significant declines in physical activity with increases in child age between ages 9 and 15 years (62-64) demonstrate a need to control for child age as a precision variable due to independent relationships with the outcomes in these analyses. Regarding caffeine intake, inconsistent relationships in the literature warrant investigation in our analyses. While previous evidence demonstrates general increases in caffeine intake with increases in age, some studies (65) observed lower caffeine intake among 9- and 10-year-olds, while other studies observed similar caffeine intake among 9-10-year-olds (66). Given these existing associations, bivariate relationships were examined between child age and salivary methodological variables. After observing significant bivariate relationships (Table 1), multivariate models were adjusted for child age as a precision variable to isolate effects due to independent relationships between each predictor and the outcomes.
To examine relationships between salivary collection methods with socioeconomic factors, we constructed the following measures.
Poverty status represents the household's socioeconomic position relative to the federal poverty level (FPL), and was indexed following prior work (68)(69)(70). Deep Poverty is an important, unique construct of experienced poverty. The participant's guardian/parent self-reported their level of education and, if partnered, also reported the partner's level of education. Household education in our analyses represents the highest level of education in the household reported by the parent; if the parent reported having a partner, the highest level of education between the parent and partner was used (71,72). Thus, to leverage a single operationalization of household education and to reflect inclusivity in gender-neutral terminology (73), we used the highest level of education in the household reported by the parent. Household marital status was categorized as "yes" if the parent reported being married, and "no" if the parent reported being widowed, divorced, separated, never married, or living with a partner.
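To make the poverty-status construct concrete, here is a minimal sketch of an income-to-needs calculation against the FPL. The threshold values and the 50%-of-FPL deep-poverty cut-point are illustrative assumptions, not the study's exact operationalization (which follows the cited sources):

```python
# Illustrative sketch only: the FPL value below is a made-up placeholder
# (actual thresholds vary by year and household size), and the 50%-of-FPL
# deep-poverty cut-point is a common convention, assumed here for illustration.

def income_to_needs(income, household_size, fpl_by_size):
    """Household income divided by the FPL for that household size."""
    return income / fpl_by_size[household_size]

def poverty_category(ratio):
    if ratio < 0.5:
        return "deep poverty"       # below 50% of the FPL (assumed cut-point)
    if ratio < 1.0:
        return "below FPL"
    return "at or above FPL"

fpl_by_size = {4: 25_000}           # placeholder FPL for a 4-person household
ratio = income_to_needs(11_000, 4, fpl_by_size)   # 11,000 / 25,000 = 0.44
category = poverty_category(ratio)
```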
Area deprivation index (ADI) was calculated as the scaled weighted sum of 17 neighborhood-level characteristics within the participant's reported census block group. A detailed list of census variables has been summarized in Kind et al. and adapted for use in ABCD (74,75). This includes proportion of population aged ≥25 years with <9 years of education; proportion of population aged ≥25 years with less than high school diploma; proportion of employed persons age 16+ in a "white collar" occupation; median household income; income disparity; median home value; median gross rent; median monthly mortgage; percent owner-occupied housing; percent of population age 16+ unemployed; percent of families below poverty line; percent of population below 138% of poverty line; percent of single-parent households with children <18 years; percent occupied housing units without vehicle; percent occupied units without telephone; percent occupied units without complete plumbing; percent occupied units with more than 1 person per room (74). Higher ADI scores, and thus upper quartile categorization, refer to higher levels of area deprivation, while lower quartile categorization refers to lower levels of area deprivation. Similar assessments of ADI have been widely applied in pediatric developmental research and support the validity of ADI for predicting child and family well-being (76)(77)(78). Specifically, within the ABCD cohort, many childhood outcomes such as brain structure and function, as well as body mass index, are associated with the ADI measure used in this analysis (79-81).
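The scaled-weighted-sum-plus-quartile construction can be sketched as follows. The indicator names, weights, and cohort scores below are invented for illustration; the actual 17-indicator weighting follows Kind et al. (74):

```python
# Illustrative sketch of an ADI-style index: a weighted sum of neighborhood
# indicators followed by quartile categorization. Indicators, weights, and
# scores are made up here; the real ADI uses 17 census measures (Kind et al.).

def adi_score(indicators, weights):
    """Weighted sum of neighborhood indicator values."""
    return sum(weights[k] * indicators[k] for k in weights)

def quartile(value, all_scores):
    """1 = least deprived quartile, 4 = most deprived."""
    ranked = sorted(all_scores)
    rank = sum(s <= value for s in ranked)        # 1-based rank of `value`
    return min(4, (rank - 1) * 4 // len(ranked) + 1)

weights = {"pct_below_poverty": 0.5, "pct_unemployed": 0.3, "pct_no_vehicle": 0.2}
block_group = {"pct_below_poverty": 30, "pct_unemployed": 12, "pct_no_vehicle": 10}
score = adi_score(block_group, weights)           # 0.5*30 + 0.3*12 + 0.2*10 = 20.6
cohort = [5.0, 8.2, 10.1, 12.4, 15.0, 18.3, score, 25.7]
q = quartile(score, cohort)                       # upper quartile = more deprived
```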
---
Methodological variables for salivary collection
The following salivary collection variables were analyzed. Time since waking reflects the duration of time from the participant's self-reported time of waking to the start of the salivary sample collection documented by the RA. If a participant's time since waking was calculated to be less than 30 min or greater than 15 h, or was missing, the values were assumed to be erroneous and were therefore excluded from the analyses (n = 84). Samples with a time since waking of less than 30 min were removed because, under the ABCD protocol, it is highly unlikely that saliva sampling occurred within this time frame. Specifically, after participants arrived at the study site, the research assistant performed a series of pre-collection assessments, including obtaining consent/assent, explaining the saliva sampling, and conducting demographic and pubertal questionnaires before soliciting a saliva sample (82). Given that the estimated time to complete these steps was at least 30 min, samples documented to be collected within 30 min of waking are likely erroneous.
Collection time of day refers to the time of day the salivary sample collection took place at the local study site laboratory. Collections that were reported before 6:00 a.m. or after 9:00 p.m. were assumed to be erroneous data and were therefore excluded from the analyses (n = 10).
Physical activity was categorized dichotomously, reflecting whether the participant was vigorously physically active (sweating, breathing heavy) for at least 20 min within the 12 h prior to sampling. Participants were classified into less than 20 min of physical activity, or greater than 20 min of physical activity.
Caffeine intake was categorized dichotomously as a yes or no response, referring to whether the participant reported consuming caffeine from a drink within the 12 h prior to sampling. We categorized affirmative responses coinciding with reports of non-zero milligrams of caffeine as "yes," and denial responses coinciding with reports of zero milligrams of caffeine as "no," for these analyses.
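The coding and exclusion rules for these four collection variables can be sketched as follows (a minimal illustration; the function and variable names are ours, not ABCD field names):

```python
from datetime import datetime

def time_since_waking_hours(wake, collect):
    """Hours from self-reported waking to the start of saliva collection.
    Values < 0.5 h or > 15 h are treated as erroneous (None = excluded)."""
    hours = (collect - wake).total_seconds() / 3600
    return hours if 0.5 <= hours <= 15 else None

def collection_time_valid(collect):
    """Collections before 6:00 a.m. or after 9:00 p.m. are treated as erroneous."""
    minutes = collect.hour * 60 + collect.minute
    return 6 * 60 <= minutes <= 21 * 60

def physically_active(minutes_vigorous):
    """Dichotomous flag: >= 20 min of vigorous activity in the prior 12 h."""
    return minutes_vigorous >= 20

def caffeine_intake(reported_yes, caffeine_mg):
    """'Yes' only when an affirmative report coincides with non-zero mg."""
    return reported_yes and caffeine_mg > 0

wake = datetime(2017, 5, 1, 7, 0)
collect = datetime(2017, 5, 1, 13, 30)
tsw = time_since_waking_hours(wake, collect)   # 6.5 h since waking
valid = collection_time_valid(collect)         # 1:30 p.m. is within the window
```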
---
Statistical analyses
Associations between socioeconomic variables and salivary collection variables were examined through a series of bivariate tests. A Spearman test of correlation (rs) was performed to examine correlations between ordinally coded socioeconomic variables (Table 2). Given that neither the participant's age in months nor the continuous salivary collection variables were normally distributed, a Spearman test of correlation (rs) was performed to examine the strength and direction of their relationship (Table 1). A Kruskal-Wallis non-parametric test of equality (H test statistic) was performed to identify differences in continuous salivary collection variables between levels of categorical socioeconomic variables (Table 1). A chi-square test of independence (χ2) was performed to identify associations between categorical salivary collection variables and categorical socioeconomic variables (Table 1).
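The Spearman test used throughout these bivariate analyses is simply a Pearson correlation computed on ranks (with tied values receiving the average rank). A minimal pure-Python sketch:

```python
# Spearman rank correlation: Pearson correlation on average ranks.

def ranks(xs):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                       # extend over the tied block
        avg = (i + j) / 2 + 1            # average rank for the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Perfectly monotone data gives rs = 1.0
rs = spearman([1, 2, 3, 4], [10, 20, 30, 40])
```

In practice one would use a library routine (e.g., R's `cor.test` or `scipy.stats.spearmanr`), which also returns a p-value; the sketch above shows only the statistic itself.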
A series of univariate and multivariate multi-level linear or logistic mixed effects models were performed to examine potential confounding effects among socioeconomic factors determining salivary collection outcomes. Multi-level models were used to account for clustering effects by study site (84). Analyses were conducted in R using the piecewiseSEM (85), lubridate (86), and Hmisc (87) packages.
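The original analyses were run in R; a comparable random-intercept specification can be sketched in Python with statsmodels. The data-generating step and variable names below are illustrative assumptions (the logistic variant is omitted for brevity):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "site": rng.integers(0, 21, size=n),    # 21 study sites
    "poverty": rng.integers(0, 5, size=n),  # hypothetical 5-level SES
    "age_months": rng.normal(119, 7.5, size=n),
})
# Simulated site-level random intercepts plus a small poverty effect on a
# log-transformed outcome (a hypothetical generative model, not ABCD data).
site_eff = rng.normal(0, 0.1, size=21)
df["log_time_since_waking"] = (
    1.5 + 0.02 * df["poverty"]
    + site_eff[df["site"].to_numpy()]
    + rng.normal(0, 0.3, size=n)
)

# Multi-level linear model: fixed effects for categorical SES and age,
# random intercept for study site to absorb site-level clustering.
model = smf.mixedlm(
    "log_time_since_waking ~ C(poverty) + age_months",
    data=df,
    groups=df["site"],
).fit()
```

Treating study site as the grouping variable is the Python analogue of the clustering adjustment described above; the fixed-effect coefficients on the poverty dummies play the role of the betas reported in Tables 3 and 4.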
---
Results
---
Descriptive statistics
Within the entire sample, the mean number of hours between participant waking and time of collection was 5.79 h, and the average time of collection was approximately 12:53 p.m. local time (not pictured). Time since waking and collection time of day were strongly and significantly positively correlated (rs = 0.93, p < 0.05). No significant association was observed between physical activity and caffeine intake (X2 = 0.25, df = 1, p = 0.61). A descriptive summary of salivary collection methods for the entire analytical sample, according to income group, is presented in Figure 2. Correlations between socioeconomic variables for the entire analytic sample are presented in Table 2. All socioeconomic variables were significantly correlated with each other, with varying directions and strengths (p-values < 0.05, Table 2). Household poverty status was strongly positively correlated with household education (rs = 0.63, p < 0.05) and moderately positively correlated with household marital status (rs = 0.46, p < 0.05). Household education was also moderately positively correlated with household marital status (rs = 0.42, p < 0.05). ADI was negatively correlated with household poverty status (rs = -0.46, p < 0.05), education (rs = -0.39, p < 0.05), and marital status (rs = -0.26, p < 0.05).
Child age (mean ± SD = 118.9 ± 7.5 months) was significantly, albeit weakly, negatively correlated with time since waking and collection time of day (p-values < 0.05, Table 1). No significant bivariate associations were observed between child age in months and physical activity or caffeine intake.
Significant bivariate associations were observed between household poverty status and all salivary collection measures, whereas other SES factors showed varying relationships with the salivary collection measures. Mean time since waking differed significantly between levels of household poverty status (Table 1; H = 12.4, df = 4, p = 0.01), yet it was not significantly associated with household education (Table 1; H = 5.04, df = 4, p = 0.28). Household marital status was also not significantly associated with time since waking (Table 1; H = 0.08, df = 1, p = 0.78). Regarding bivariate associations at the neighborhood level with ADI, mean time since waking (H = 13.9, df = 3, p = 0.003) differed significantly between quartiles of neighborhood deprivation (Table 1).
Additionally, while mean collection time of day differed significantly between levels of household poverty status (Table 1; H = 25.8, df = 4, p < 0.001) and household education (Table 1; H = 11.6, df = 4, p = 0.02), it was not significantly associated with household marital status (Table 1; H = 3.8, df = 1, p = 0.05) or ADI (Table 1; H = 6.2, df = 3, p = 0.10).
Lastly, categories of physical activity and caffeine intake were not independent of household poverty status, household education, marital status, or ADI (i.e., the null hypothesis of independence was rejected; Table 1). Whether a participant engaged in physical activity prior to sampling was significantly associated with household poverty status (X2 = 10.8, df = 4, p = 0.03), household education (X2 = 19.7, df = 4, p < 0.001), marital status (X2 = 10.3, df = 1, p = 0.001), and ADI (X2 = 13.3, df = 3, p = 0.004). In addition, caffeine consumption prior to sampling was significantly associated with household poverty status (X2 = 66.6, df = 4, p < 0.001), household education (X2 = 124.3, df = 4, p < 0.001), marital status (X2 = 35.8, df = 1, p < 0.001), and ADI (X2 = 45.8, df = 3, p < 0.001).
---
Child age
In univariate models, no significant independent relationships were observed between child age in months and either time since waking or collection time of day. However, because of the significant bivariate associations between child age and these salivary collection methods (Table 1), child age (months) was adjusted for in the multivariate models predicting the outcomes described below.
---
Time since waking
Time since waking refers to the timeframe between the participant's waking time and the subsequent start of saliva collection. Univariate analyses demonstrated a significant 5.34% longer time since waking among deep poverty households compared to high income households (Table 3; β = 0.05; p < 0.0125). ADI was not significantly associated with time since waking (Table 3).
---
Multivariate
When adjusting for child age or ADI in multivariate analyses, significant relationships were observed between household poverty status and a longer time since waking (Table 3; Model 1 and Model 2). Deep poverty households demonstrated a significant 2.06% longer time since waking compared to high income households, adjusting for only child age (Table 3; β = 0.02; p < 0.0125). Moreover, when adjusting for both child age and ADI, time since waking was significantly 5.88% longer among deep poverty households compared to high income households (Table 3; β = 0.057; p < 0.0125).
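The percentage differences reported alongside these betas are consistent with coefficients from a model of a log-transformed outcome, where a coefficient β implies a 100·(exp(β) − 1)% difference relative to the reference group. Because the published betas are rounded, recomputing gives only approximate matches:

```python
import math

def pct_diff(beta: float) -> float:
    """Percent difference implied by a coefficient on a log-scale outcome."""
    return 100.0 * (math.exp(beta) - 1.0)

# Rounded Table 3 coefficients recover the reported percentages only
# approximately (the exact values depend on the unrounded betas).
print(round(pct_diff(0.057), 2))  # 5.87 (~reported 5.88%)
print(round(pct_diff(0.02), 2))   # 2.02 (~reported 2.06%)
```

The same back-transformation explains why a β of 0.05 can correspond to a reported 5.34% difference: the percentage was presumably computed from the unrounded coefficient.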
---
Collection time of day
Collection time of day refers to the local time of day of the salivary sample collection. In univariate analyses, deep poverty households demonstrated collection start times significantly 2.43% later in the day compared to high income households (Table 4; β = 0.024; p < 0.0125). No significant univariate differences were observed for marital status, levels of household education, or ADI (Table 4).
---
Multivariate
In multivariate analyses adjusting for child age, marital status, and household education, the significant relationships between household poverty status and collection time of day were maintained (Table 4; Model 3). Collection start times among deep poverty households were a marginally significant 2.41% later in the day compared to high income households (Table 4; β = 0.024; p = 0.016). When ADI was included in the multivariate analyses, the marginally significant relationship between household poverty status and collection time of day was still maintained (Table 4; Model 4).
---
Physical activity
Physical activity refers to any rigorous physical activity for 20 or more minutes in the 12 h prior to providing a saliva sample. In univariate analyses, the odds of physical activity increased significantly with decreasing levels of poverty. Deep poverty households demonstrated 42% lower odds of physical activity within 12 h of salivary sampling compared to high income households (Table 5; OR = 0.58, 95% CI [0.44-0.77]; p < 0.0125). Despite a stepwise increase in the odds of physical activity among less impoverished households, these households were still less likely than high income households to engage in physical activity, albeit not significantly.
In univariate analyses, lower levels of household education demonstrated significantly lower odds of physical activity compared to households with Graduate/Professional educations (Table 5; p < 0.05). Of note, the univariate relationships between household education (i.e., HS graduate and College graduate) and physical activity were not significant after Bonferroni correction. There was a pattern of increasing odds of physical activity with higher education levels. Households with less than a HS education demonstrated 43% reduced odds of physical activity in the 12 h prior to salivary sampling compared to Graduate/Professional households (OR = 0.57, 95% CI [0.41-0.79]; p < 0.0125). Households with a HS graduate, Some College/Associate, or College education demonstrated a respective 26, 27, and 16% reduced odds of physical activity compared to the reference group (Table 5).
ADI was not significantly associated with physical activity in univariate analyses (Table 5).
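Two conventions used throughout these odds-ratio results are worth making explicit: an OR below 1 is reported as a percent reduction in odds, and the p < 0.0125 threshold corresponds to a Bonferroni correction of α = 0.05 across four comparisons. A quick check:

```python
import math

def pct_reduced_odds(odds_ratio: float) -> float:
    """Percent reduction in odds implied by an odds ratio below 1."""
    return 100.0 * (1.0 - odds_ratio)

def or_from_logit_coef(beta: float) -> float:
    """Odds ratio recovered from a logistic-regression coefficient."""
    return math.exp(beta)

# OR = 0.58 corresponds to the reported 42% lower odds.
print(round(pct_reduced_odds(0.58)))  # 42

# Bonferroni-corrected threshold for four comparisons.
alpha = 0.05 / 4
print(alpha)  # 0.0125
```

The same arithmetic applies to the 26, 27, and 16% figures above (ORs of roughly 0.74, 0.73, and 0.84).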
---
Multivariate
In multivariate analyses adjusting for household socioeconomic factors and ADI, relationships between household poverty status and odds of physical activity became fully attenuated (Table 5; Model 5 and Model 6).
Relationships between household education and the odds of physical activity became partially attenuated. Only households with a Some College/Associate education demonstrated 21% lower odds of physical activity (OR = 0.79, 95% CI [0.64-0.97]; p < 0.05) within 12 h of salivary sampling compared to households with Graduate/Professional educations (Table 5). This result, however, was not significant after Bonferroni correction.
Despite the univariate non-significance between ADI and physical activity, a marginally significant relationship between ADI and physical activity emerged in multivariate analyses adjusting for household marital status, household poverty status, and household education. An ADI in quartile 2 (i.e., a moderately deprived neighborhood) was significantly associated with 1.23-fold higher odds of physical activity compared to an ADI in quartile 1 (least deprived) (OR = 1.23, 95% CI [1.00-1.50]; p < 0.05). These results were not significant after Bonferroni correction.
---
Caffeine intake
Caffeine intake refers to the child's self-report of any caffeinated beverage during the 12 h prior to providing a saliva sample. In univariate analyses, significantly higher odds of caffeine intake were observed among lower levels of household poverty compared to high income households (Table 6).
Lower levels of household education demonstrated significantly higher odds of caffeine intake compared to households with Graduate/Professional educations (Table 6; p < 0.0125). Households with less than a HS education demonstrated 2.88-fold higher odds of caffeine intake in the 12 h prior to salivary sampling compared to Graduate/Professional households (OR = 2.88, 95% CI [2.07-4.01]; p < 0.0125). There was a pattern of decreasing odds of caffeine intake with higher education levels. Households with a HS graduate, Some College/Associate, or College education demonstrated a respective 2.79-, 2.33-, and 1.50-fold higher odds of caffeine intake compared to the reference group (Table 6).
ADI was significantly associated with caffeine intake only in univariate analyses (Table 6). Residing in highly deprived neighborhoods (i.e., quartiles 3 and 4) was significantly associated with 1.59- to 1.89-fold (p < 0.0125) higher odds of caffeine intake compared to residing in the least deprived neighborhoods (quartile 1).
---
Multivariate
In multivariate analyses adjusting for household marital status, education, and ADI, the relationships between household poverty status and the odds of caffeine intake, as well as between ADI and caffeine intake, became fully attenuated (Model 7 and Model 8). However, significant relationships between household education and caffeine intake were maintained (Model 7 and Model 8).
---
Discussion
The findings from this study demonstrate significant associations between several key salivary methodological variables (time since waking, collection time of day, physical activity, and caffeine intake) and key socioeconomic factors (poverty status, household education, and neighborhood deprivation). In general, lower levels of household income and education were significantly associated with the salivary collection methodological variables (e.g., longer times since waking, collections later in the day, higher odds of caffeine consumption, and lower odds of physical activity). Furthermore, household socioeconomic context and neighborhood socioeconomic context were differentially associated with these variables. This indicates that multiple sources of socioeconomic variation can independently introduce methodological biases when collection is not fully standardized across data collection sites and individual participants. Together, the present findings suggest that analyte levels measured from these samples may be affected by non-random systematic methodological biases, particularly for analytes sensitive to variability in pH levels (e.g., from caffeine in the sample), physical activity/exercise, or circadian patterns. Leveraging this large salivary data set will therefore require additional care when using salivary analytes in future examinations of the early life antecedents of health inequities. Finally, only a subset of key socioeconomic factors and salivary sampling methodological variables were assessed in the present analyses; other factors that drive health inequities may affect salivary methodological variables beyond those examined in this study.
Household poverty status was consistently and significantly associated with salivary methodological variables in univariate analyses, often when comparing highly impoverished households with less impoverished households. These relationships were maintained in the multivariate analyses specifically predicting time since waking and collection time of day. Significant relationships between household poverty status and physical activity and caffeine intake were attenuated in multivariate analyses when adjusting for household marital status, household education, or ADI. To our knowledge, no study has examined direct relationships between household poverty status and salivary collection variables among pediatric populations. Our measure of poverty status (i.e., household income as a function of household size) may reflect more proximal measures of material or economic goods that, when scarce in impoverished households, facilitate longer durations between waking and arriving at the laboratory to provide a saliva sample, as well as sampling later in the day. With this, it may be that a reduction in the economic goods associated with an impoverished household leads to unique barriers preventing an early arrival at the study site shortly after waking and earlier in the day, thereby placing salivary collections in the "tail" of diurnal rhythms when levels are low. Also, later sampling times among participants from impoverished households may have been partially or fully driven by site-specific differences in access (e.g., differences in travel time and distance). Alternatively, given the semi-flexible experimental design of the cohort study, it is possible that households in poverty self-selected a later study start time over an earlier start time in anticipation of additional barriers, such as prioritizing employment responsibilities, geographical or transportation barriers, or responsibilities for other children without funds for additional childcare.
Differential preferences for coming into the laboratory on a weekday versus a weekend may be another contributing source of this variability that was not investigated in the present analysis. Additionally, the attenuated relationships between household poverty status and physical activity and caffeine intake after accounting for additional socioeconomic factors, such as household education or ADI, suggest that differences in the likelihood of physical activity or caffeine intake may be partially attributable to a complex interaction between several socioeconomic constructs. It is possible that individual measures of SES are less apt to capture differences compared to composite forms of SES that include income, education, and neighborhood characteristics (53, 88). While these are only some possible explanations, these differences in salivary sampling methodological variables may partially, yet spuriously, drive future SES-related differences, or null findings, in observed salivary analyte levels that are sensitive to variability in sampling methodological variables.
Household education was not significantly associated with time since waking or collection time of day but was significantly associated with physical activity and caffeine intake in univariate and multivariate analyses. Again, to our knowledge, no study has examined direct relationships between household education and on-site salivary collection methodological variables among adolescent populations. Even so, Krieger et al. reported weak associations between education level and physical health status, though only among those living below the poverty line (89). While that study was performed among adults and examined health status, it partially supports our non-significant findings between household education and time since waking or collection time of day. In addition, relationships in our study between household education and physical activity were only significant when comparing households with a Some College/Associate education to households with a Graduate/Professional education while adjusting for household poverty status. These findings are also in line with those of Krieger et al., where the level of education operates on health differentially by poverty status (89). Nonetheless, this evidence may explain why household education was only sparsely related to the salivary collection variables. The inclusion of both household education and household poverty status in the same statistical models potentiates confounding, given evidence of strong positive correlations between one's education level and income (88). However, we checked the variance inflation factor (VIF) values for these models, and all were below 2.09, indicating that these variables were not redundant in predicting the outcomes in this study sample.
When examining neighborhood socioeconomic contexts, significant relationships were observed for ADI when predicting the multivariate odds of physical activity and the univariate odds of caffeine intake, whereas ADI was not significantly associated with time since waking or collection time of day. Cerin et al. demonstrated complex relationships between environmental factors and individual-level or household-level factors (e.g., household income and education) that affect participation in physical activity (90). Differences in performing moderate to vigorous physical activity due to area-level socioeconomic factors were significantly mediated by several individual-level factors (e.g., social support from friends and self-efficacy), but not significantly mediated by infrastructure or area-level crime (90). While ADI is a well-validated measure of neighborhood-level socioeconomic context, there are other ways to assess this construct beyond the current version (91, 92), which may miss key characteristics that are important for understanding the childhood origins of health inequities. The measure of ADI used in this study is a composite of multiple forms of area economic and resource deprivation. This indicates that relationships between area-level SES and physical activity may be partially explained by individual-level factors not recorded as part of this study. While limited in its ability to inform individual-level patterns (e.g., due to the ecological fallacy), this ADI measure includes factors reflecting basic resources (e.g., plumbing, telephone) that would not be captured by income and education alone.
---
Strengths and limitations
Despite the evidence for potential non-random systematic bias in the salivary sampling methodological variables of the present cohort study, the study design has several strengths. First, the ABCD Study® achieved coordination among 21 sites for the successful self-collection of saliva in a large pediatric cohort, repeated annually. This strength adds to the salience of the observed findings in this nationally representative pediatric study sample and further highlights the utility of salivary bioscience research at large scales and with pediatric populations. Second, the cohort sample of children was successfully recruited from the general population, rather than as a convenience sample of those presenting to a clinical site, thus adding to the heterogeneity of the cohort sample and thereby increasing the external validity of the present findings for future large-scale salivary collections. Additionally, uncovering socioecological relationships using data obtained in a non-invasive way means that the salivary biosciences are well-suited to understanding public health issues, particularly among children from families underrepresented in research (93). The salivary methodological variables examined in this study are often applicable to other forms of biological sample collection measuring acutely fluctuating levels (e.g., blood, urine) for analytes that vary across the time of day yet correlate with salivary levels (94-98). Thus, our results may generalize beyond saliva and may apply to other large biomedical research studies. Biological methods measuring chronic levels in other matrices (e.g., hair, nails, teeth) would not be affected by these methodological variations.
Nonetheless, there are several limitations to the current analyses. First, part of the exclusion criteria for the current analytical sample was a mismatch between the parental report of "biological sex at birth" and the participant's endorsement of a binary "biological sex/gender" at the time of saliva collection at baseline (i.e., the current analyses). Unfortunately, given that sex at birth determined the hormone panel for testing prior to Year 3, this protocol misrepresents associations between estradiol and variants of male sex or gender expression by not assaying saliva samples for the assumed "female" hormone. This experimental strategy potentially excludes important dynamics of gender identification throughout pubertal maturation (99) and may limit our ability to fully understand how hormones emerge across a diversity of gender identities in the current data set. In Year 3, the ABCD protocol solicited the participant's endorsement of any gender identity at saliva collection; however, this is not part of the Release 3.0 dataset used in these analyses. Additional gender identity-specific assessments were also added to the study at the Year 3 timepoint. After Year 3, biological males at birth endorsing a male gender identity were assessed for testosterone and DHEA only, and all other possible combinations of gender identity endorsement (including neither gender) were assessed for testosterone, DHEA, and estradiol. Future analyses using the ABCD dataset for Year 3 and later should leverage the gender identity data, which better capture the dynamics of gender identification, alongside the salivary hormones. Second, there are many ways to capture socioeconomic status (SES), including measures of employment or unemployment status, wealth, type or status of occupation, or numeric income level (100). The variables used in this study are mostly reflective of household economic resources and household education.
Previous evidence indicates that education and poverty status represent just two of many overlapping yet distinct dimensions comprising SES, rather than being entirely reflective of SES (53). Given that SES is a dynamic, multi-dimensional construct, the exclusion of other aspects of SES may only provide a partial understanding of socioecological relationships on salivary collection methodological variables.
Another limitation is the relatively smaller sample sizes of the deep poverty and poverty groups compared to the high income group, given that larger sample sizes are better statistically powered to detect small effect sizes. Imbalances in sample sizes can thus bias the findings of smaller effect sizes between groups, especially where the comparison group (e.g., deep poverty or poverty) has a smaller sample size than the reference group (e.g., high income). The deep poverty and poverty groups are likely underpowered to detect small effects and are the most at risk for null findings. Null findings between deep poverty and poverty and the salivary methodological variables in the present study should therefore be interpreted with caution. However, the deep poverty and poverty sample sizes were n = 798 and n = 616, respectively, which is relatively robust for pediatric biomedical research. In addition, for many of the observed findings, the effect sizes of the significant results in this analysis are relatively small to moderate. These results may not be observable in studies with smaller sample sizes, as such studies may be underpowered to detect small effect sizes. Without being contextualized to specific analytes of interest, the practical application of the current findings is limited.
In addition, an area-level measure such as the ADI is subject to the ecological fallacy, because aggregate-level patterns may not actually reflect individual-level socioeconomic measures (53, 101). Although we leveraged multi-level models accounting for participant clustering by study site, we observed different relationships with the salivary collection methodological variables for household income/education versus ADI. One potential explanation for why ADI was not related to the time-dependent salivary collection variables is that ADI may not be as proximal as household-level factors, and thus would not reflect direct relationships with time since waking or collection time of day.
Another limitation of these analyses is the focus on salivary bioscience methodological variables only. The relationships discovered in the present analyses were not examined further in relation to the specific salivary analytes that have been assayed in the samples (e.g., DHEA, testosterone, estradiol); associations of socioeconomic-based differences in salivary collection methodological variables with salivary analyte levels were not directly tested. Further, our examination of baseline relationships may also limit interpretability over time, especially with longitudinal changes in SES for a participant, changes in salivary methodological variables (e.g., sampling at a different time of the day, or different physical activity or caffeine intake habits as participants age), and even longitudinal changes in analyte levels. Given the breadth of research questions and corresponding analytical approaches possible with this dataset, associations between methodological biases and analyte levels could vary across independent and longitudinal investigations, and there are important considerations as to whether these relationships are stable over time. We rely on the existing literature that points to interference with accurate analyte measurement due to collection methodological variables (11, 29, 30, 35-37, 39, 40). Rather than testing such interference directly, this analysis encourages researchers examining health inequities to conduct a thorough examination of salivary collection methods prior to leveraging analyte levels.
While we observed significant relationships between socioeconomic and salivary sampling methodological variables, we cannot draw conclusions about the magnitude and directionality of relationships to specific analytes. Based on the previous literature on neuroendocrine circadian patterns (30-34), we predict that these differences in salivary sampling methodological variables will become more problematic as participants continue to mature, as circadian patterns become more pronounced with maturation, and differences in exercise and caffeine intake may grow with age as a function of key socioeconomic factors. However, not all salivary analytes demonstrate a circadian rhythm or are sensitive to changes in sample pH or physical activity. Thus, some specific analytes may be relatively unaffected by the patterns observed in these analyses. Researchers should evaluate whether their salivary analytes of interest reflect the observed patterns in their own analyses and, if so, intentionally address them in the analysis and interpretation of salivary analyte results.
The examination of socioeconomic factors alongside other salivary sampling methodological variables collected in ABCD was out of scope for the current analyses; these include cotinine levels from first- and/or second-hand tobacco exposure in the children and medications that may alter salivary flow rates. However, future studies of bio-banked salivary samples could measure cotinine directly from the sample to statistically control for these confounders. There are additional salivary sampling methodological considerations that were not fully collected in this large data set, such as participant-reported factors of the oral environment (e.g., blood from sores, lost teeth, injury). Research assistants used a 5-point scale to document visible alterations in the saliva sample, including the presence of discoloration from food dye or blood, and food particles. A visual inspection of the salivary sample was also conducted by professional laboratory staff at the time of assaying (e.g., Salimetrics, Carlsbad, CA, United States) to note any abnormalities in the sample. Future studies would benefit from a thorough oral health questionnaire at the time of sample collection to account for salivary sample contamination. To achieve the most rigorous use of salivary analytes, all of these methodological factors should be controlled for, either through upfront experimental design in future studies, expansion of questionnaires on oral health, or careful and intentional statistical analyses, to fully understand how socioeconomic factors may drive experimental noise and interfere with results. This includes maintaining strict protocols for saliva sampling regarding time since waking, time of day, and sample collection duration, and abstaining from caffeine, smoking, and rigorous physical activity for 12 h prior to sampling.
Lastly, the examination of racial/ethnic differences was outside the scope of this analysis; however, we encourage investigators to consider intentionally integrating upstream measures when investigating research questions pertaining to racial and ethnic minoritized groups (102-104). For example, structural racism has been identified as an important driver of adverse health among racial and ethnic minoritized groups, including adolescents (105-107). Future salivary bioscience studies must acknowledge the root causes of racial/ethnic differences in health, and these considerations should be integrated into salivary bioscience research examining race/ethnicity, particularly through collaboration with experts in structural racism.
---
Conclusion
Significant associations were observed between socioeconomic factors and salivary collection methodological variables. Specifically, lower levels of household income and education were significantly associated with more sources of potential bias in the salivary collection methodological variables (e.g., longer times since waking, collections later in the day, higher odds of caffeine consumption, and lower odds of physical activity). These novel findings serve as a cautionary note for future analyses leveraging analyte levels from these salivary samples to examine the early antecedents of health inequities, as results may reflect variations in the methodological variables of salivary collections (e.g., time since waking to sampling, time of day of sampling, physical activity, and caffeine intake) rather than actual biological mechanisms. Entangled contributions to biological functioning from socioeconomic factors remain a potential source of non-random systematic biases. Conclusions about biological functioning made using saliva while accounting only for salivary collection methodological variables, without consideration of socioeconomic factors, may erroneously attribute group differences to differences in biological functioning rather than to the broader upstream socioeconomic environment.
Frontiers in Public Health 16 frontiersin.org
These results advance salivary bioscience research by applying a health equity perspective that considers socioeconomic factors when standardizing salivary methodology. The findings highlight the importance of developing experimental designs that standardize salivary collections to prevent potential unintentional non-random systematic biases in saliva sampling methodology. Specifically, our results suggest that future studies should ensure participants self-collect at the same time of day, for the same collection duration, and in the absence of rigorous physical activity or caffeine consumption in the 12 h prior to providing a sample. If stringent sample collection protocols are not feasible, we recommend that future studies collect information on potentially important salivary methodological variables (e.g., time since waking, collection time of day, physical activity, caffeine intake, oral health, medications), utilize post-hoc statistical techniques (e.g., adjustment) to cautiously disentangle effects, and target analytes that are robust to variability in salivary methodological variables. Nonetheless, salivary samples were collected effectively from participants across 21 sites, demonstrating the feasibility of guided self-sampling of a non-invasive biological specimen in a large-scale pediatric study. These samples have strong potential to be leveraged in investigations of biological mechanisms across the entire sample, yet more cautiously in analyses leveraging factors that drive health inequities.
---
Data availability statement
Publicly available datasets were analyzed in this study. These data can be found in the NIMH Data Archive (NDA), Adolescent Brain Cognitive Development (ABCD) Study (http://dx.doi.org/10.15154/1519007).
---
Ethics statement
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
---
Author contributions
HM and KU contributed to the design of the study. HM led analyses and writing. KU contributed to analyses and writing. All authors contributed to the article and approved the submitted version.
---
Funding
Support for the preparation of this manuscript was provided by the National Institute on Alcohol Abuse and Alcoholism (award no. K01AA026889 to PI: KU), including GSR support for HM, Pre-doctoral Candidate. Data used in the preparation of this article were obtained from the Adolescent Brain Cognitive Development (ABCD) Study (https://abcdstudy.org), held in the NIMH Data Archive (NDA). This is a multisite, longitudinal study designed to recruit more than 10,000 children aged 9-10 and follow them over 10 years into early adulthood. The ABCD Study ® was supported by the National Institutes of Health and additional federal partners under award numbers U01DA041048, U01DA050989, U01DA051016, U01DA041022, U01DA051018, U01DA051037, U01DA050987, U01DA041174, U01DA041106, U01DA041117, U01DA041028, U01DA041134, U01DA050988, U01DA051039, U01DA041156, U01DA041025, U01DA041120, U01DA051038, U01DA041148, U01DA041093, U01DA041089, U24DA041123, U24DA041147. A full list of supporters is available at https://abcdstudy.org/federal-partners.html. A listing of participating sites and a complete listing of the study investigators can be found at https://abcdstudy.org/consortium_members/. ABCD consortium investigators designed and implemented the study and/or provided data but did not necessarily participate in the analysis or writing of this report. This manuscript reflects the views of the authors and may not reflect the opinions or views of the NIH or ABCD consortium investigators. The ABCD data repository grows and changes over time. The ABCD data used in this report came from http://dx.doi.org/10.15154/1519007. DOIs can be found at https://nda.nih.gov/study.html?id=901.
---
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The handling editor MG declared past co-authorships with the author KU.
---
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. |
Many factors influence the incidence of type 2 diabetes mellitus (T2DM). Here, we investigated the associations of socio-demographic characteristics and familial history with the 5-year incidence of T2DM in a family-based study conducted in Brazil. T2DM was defined as baseline fasting blood glucose ≥ 126 mg/dL or the use of any hypoglycaemic drug. We excluded individuals with T2DM at baseline or those who did not attend both examination cycles. After exclusions, we evaluated a sample of 1,125 participants, part of the Baependi Heart Study (BHS). Mixed-effects logistic regression models were used to assess incident T2DM given different characteristics. At the 5-year follow-up, the incidence of T2DM was 6.7% (7.2% in men and 6.3% in women). After adjusting for age, sex, and education status, the model that combined marital and occupation status, skin color, and familial history of T2DM provided the best prediction of T2DM incidence. Only marital status was independently associated with T2DM incidence. Individuals who remained married, despite having significantly increased their weight, were significantly less likely to develop diabetes than their divorced counterparts.

---

Introduction
Type 2 diabetes mellitus (T2DM) is a multifactorial metabolic disease characterized by the development of insulin resistance and, subsequently, the loss of β-cell function. The worldwide prevalence was 30 million 70 years ago and 108 million 35 years ago [1,2]. It is known that T2DM is rising faster in low-income and middle-income countries than in high-income countries [1,2]. Brazil rose from seventh in 1980 to fourth in 2014 in the worldwide country ranking of diabetes prevalence (from 2.7 to 11.7 million adults with diabetes) [1].
There are important limitations in generalizing determinants of T2DM incidence across different populations [1][2][3][4][5][6][7]. This is partially explained by differences in obesity rates, lifestyle, health system resources, and access to medications for preventing the disease [3,8]. The association between marital status and various diseases has been investigated. For T2DM in particular, while some results have highlighted the beneficial effect of marriage [9][10][11], poor marital quality may be a unique risk factor in men [12], and being widowed has been associated with a lower risk in women [13]. Moreover, marriage patterns have changed in recent years: people marry later and less often than in the past, there are more divorces, and gender roles within marriage have changed [14], all of which could modify these relationships.
This study aimed to identify the relative importance of socio-demographic variables, in particular marriage status, associated with T2DM incidence in a Brazilian sample from a rural area, after a 5-year follow-up period.
---
Materials and methods
---
Study population
The Baependi Heart Study (BHS) is a Brazilian cohort that investigates cardiovascular risk factors and other non-communicable diseases, including individuals of both sexes aged 18 years or older. At baseline (cycle 1, 2005 to 2006), 1,695 individuals in 95 families were recruited in Baependi (19,117 inhabitants, 752 km²), located in Minas Gerais State, Brazil [15]. Five years later (cycle 2, 2010 to 2013), 2,495 individuals from 125 families were evaluated [16]. At each examination cycle, socio-demographic, behavioural, medical history, and physical characteristics were assessed using a standardized protocol. Trained staff collected socioeconomic and clinical data, and all participants were examined in the same research center [15,16].
Of those 2,495 individuals at cycle 2, 1,341 had also been assessed at cycle 1; thus, 354 participants were lost to follow-up or died, and 800 were new participants assessed only at cycle 2.
For this study, we analysed individuals who attended both examination cycles (n = 1,341). Participants with missing data (n = 84 in cycle 1; n = 45 in cycle 2) were excluded. Individuals with fasting blood glucose ≥ 126 mg/dL or who used hypoglycaemic medications in cycle 1 (n = 87) were also excluded. After exclusions, data on 1,125 diabetes-free individuals in cycle 1 were used to assess incident T2DM in cycle 2.
The study protocol was approved by the ethics committee of the Hospital das Clínicas (SDC: 3485/10/074), University of São Paulo, Brazil, and each individual provided informed written consent before participation.
---
Sample characteristics
Socio-demographic characteristics included education, marital and occupation status, income, and skin color/race, assessed via interviews using a standardized questionnaire. Education status comprised four categories: 1) illiterate, or never attended school despite being able to read and write, or attended school for 1 to 4 years; 2) attended school for 5 to 8 years (incomplete or complete primary schooling); 3) attended school for 9 to 11 years (incomplete or complete secondary schooling); 4) attended school for more than 11 years or finished university. For analysis, we grouped education into low (categories 1 and 2) or high (categories 3 and 4) levels.
Marital status was defined as 1) married, 2) single, and 3) divorced/widowed. Occupation status was categorized as 1) employed or retired and 2) unemployed. Since income was very homogeneous in this sample (about 80% of participants were in the same range of 250-500 dollars/month), we included only occupation status in our analysis. Skin color/race was self-reported (white, brown, black, and indigenous) and stratified into white and non-white for the current analysis.
Social behaviour was also assessed. Smoking status was dichotomized into current/former smokers or never smokers. Alcohol consumption was defined as never drinkers versus current or former drinkers.
---
Clinical and laboratorial characteristics
Body mass index (BMI) was calculated as body weight (kg) divided by height squared (m²). BMI was categorised as normal weight (< 25 kg/m²), overweight (25 kg/m² to 29.9 kg/m²) and obesity (≥ 30 kg/m²). Waist circumference was measured half-way between the lowest rib and the iliac crest while the subject was at minimal respiration. Blood pressure (BP) was measured using a standard digital sphygmomanometer (OMRON, model HEM-741CINT) on the left arm after 5 minutes of rest in the sitting position. Systolic blood pressure (SBP) and diastolic blood pressure (DBP) were calculated as the mean of three readings.
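As a quick illustration of the anthropometric definitions above (a hypothetical Python sketch for the reader; the study's own analyses were not performed this way), BMI and its categories can be computed as:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Categorise BMI as in the study: <25 normal, 25-29.9 overweight, >=30 obesity."""
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obesity"

# Example: 85 kg at 1.70 m
print(round(bmi(85, 1.70), 1), bmi_category(bmi(85, 1.70)))  # prints: 29.4 overweight
```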
Hypertension was defined as SBP ≥ 140 mmHg or DBP ≥ 90 mmHg or the use of antihypertensive medications. Dyslipidaemia treatment was defined as the use of lipid-lowering drugs. Diabetes mellitus was defined as fasting blood glucose ≥ 126 mg/dL or the use of hypoglycaemic medications.
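The case definitions above are simple thresholds; a hypothetical sketch (not the study's code) could encode them as:

```python
def hypertension(sbp: float, dbp: float, on_antihypertensives: bool = False) -> bool:
    """Hypertension as defined in the study: SBP >= 140 mmHg, DBP >= 90 mmHg,
    or current use of antihypertensive medication."""
    return bool(sbp >= 140 or dbp >= 90 or on_antihypertensives)

def incident_t2dm(fasting_glucose: float, on_hypoglycaemics: bool = False) -> bool:
    """T2DM as defined in the study: fasting glucose >= 126 mg/dL
    or use of hypoglycaemic medication."""
    return bool(fasting_glucose >= 126 or on_hypoglycaemics)

print(hypertension(138, 92), incident_t2dm(110, on_hypoglycaemics=True))  # prints: True True
```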
Blood collection was standardized, and laboratory assays were conducted in the same clinical chemistry laboratory. Fasting status was declared by participants at the time of blood collection, and a fasting duration of 12 hours was requested.
---
Statistical analysis
The incidence of T2DM was assessed after a 5-year follow-up of individuals free of the disease at baseline (n = 1,125 participants).
For the descriptive analysis, categorical variables are presented as percentages, and only age is summarised as the mean ± standard deviation (SD). Categorical covariates were compared using the Chi-square test, and means (age) were compared using Student's t-test.
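As an illustration of the group comparisons described above, the Pearson chi-square statistic for a 2×2 table can be computed by hand. This is a hypothetical sketch with made-up counts, not the study's code (the study's analyses were run in R):

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]:
    sum of (observed - expected)^2 / expected under independence."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Illustrative table only: outcome yes/no by sex (counts are invented)
stat = chi_square_2x2(35, 450, 40, 600)
print(round(stat, 3))  # ≈ 0.414, well below the 3.84 critical value (α = 0.05, 1 df)
```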
Mixed-effects logistic regression models were used to assess the incidence of T2DM, adjusting for different characteristics and including family as a cluster variable. We chose logistic regression models instead of Cox proportional hazards models because our study included only two visits with the same time interval for all participants [17]. All analyses were adjusted for age and sex. Exploratory analyses (sensitivity analyses and a model for diabetes incidence adjusted for BMI change) were conducted post hoc, after marital status was identified as the main socio-demographic predictor of diabetes incidence, in order to examine changes in marital status during the 5-year follow-up and the interaction between sex and BMI changes. All analyses were performed using R version 3.4.2.
---
Results
---
General characteristics in the Baependi Heart Study
From Table 1, 57% of participants were women. Approximately 76% of all individuals reported themselves as white, and 30% had a familial history of T2DM. More men than women were single (31% vs 25%), smokers (20% vs 12%), and employed with their own income (87% vs 57%). At baseline, the mean age was similar for both sexes (41 ± 15 years for women and 43 ± 17 years for men).
Obesity and altered waist circumference increased over time in both sexes. Dyslipidaemia medication use increased approximately 3-fold in both sexes, and hypertension medication use nearly doubled in men (from 15% to 28%).
---
Type 2 diabetes mellitus status according to socio-demographic characteristics
The incidence of T2DM was 6.7% in the general sample (75 T2DM in 1125 participants) over 5 years, and there was no significant difference based on sex (7.2% for men and 6.3% for women) (p = 0.63). Based on age groups, the T2DM incidence was 6% (< 29 years); 5.1% (30 to 39 years); 6.4% (40 to 49 years); 6.8% (50 to 59 years); 14% (60 to 69 years); 7% (> 70 years). In the BHS sample, the rate of undiagnosed cases was 30%.
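The overall figure can be checked directly from the counts reported above (75 incident cases among 1,125 participants at risk); a trivial sketch:

```python
def incidence(cases: int, at_risk: int) -> float:
    """Cumulative incidence (%) over the follow-up period."""
    return 100 * cases / at_risk

print(round(incidence(75, 1125), 1))  # prints: 6.7
```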
The incidence of T2DM was also analysed according to socio-demographic variables (Table 2). T2DM was more frequent in individuals with high education status and in those who were divorced or widowed.
The only socio-demographic variable independently associated with the odds of developing diabetes was marital status (Table 3). In our sample, 13% of divorced, 6% of married and 6% of single individuals developed T2DM. After adjusting these estimates for age and sex, being married was associated with an odds ratio of 0.39 for developing diabetes, and being single with an odds ratio of 0.33.
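For intuition, the crude (unadjusted) odds ratios implied by the group proportions above can be derived directly. This is a hypothetical sketch, not the study's code; note that the crude ratio differs from the age- and sex-adjusted estimates of 0.39 and 0.33 reported in the text:

```python
def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1 - p)

def odds_ratio(p_group: float, p_ref: float) -> float:
    """Unadjusted odds ratio of an outcome in a group relative to a reference group."""
    return odds(p_group) / odds(p_ref)

# Crude proportions from the text: 6% of married vs 13% of divorced developed T2DM
print(round(odds_ratio(0.06, 0.13), 2))  # prints: 0.43
```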
Further investigating this relationship, at baseline there was no difference in glucose levels among the three marital status groups, nor in BMI between married and divorced/widowed individuals (p value = 0.86) or between single and divorced/widowed individuals (p value = 0.12). In addition, adding baseline BMI to a model predicting diabetes incidence did not significantly change the estimated effect size of marital status, suggesting that the observed association is not mediated by baseline BMI (the odds ratio for being married changed from 0.39 to 0.38; for being single it remained 0.33). Nonetheless, the observed association could be mediated by changes in marital status between baseline and the 5-year follow-up. Comparing baseline and 5-year marital status, 63% of individuals remained in their baseline status. Of the 37% who changed, the majority were single individuals who had married in the previous 5 years (38% of those who changed marital status).
A sensitivity analysis using only individuals who remained in their baseline marital status showed that the estimated effect sizes for being married and being single did not change.
Another analysis, based on a model for diabetes incidence adjusted for BMI change in addition to all previous potential confounders, showed that BMI change was strongly associated with increased odds of developing diabetes (p value = 0.01); however, its addition did not change the estimated effect size of baseline marital status (OR of 0.39 for being married and 0.31 for being single). Only those who married (p value = 0.0001) or remained married (p value = 0.01) presented significant increases in BMI after 5 years. Despite this weight gain, individuals from these groups were still significantly less likely to develop diabetes than divorced/widowed individuals (Fig 1).
---
Discussion
This is one of the first studies describing T2DM incidence in a sample from a rural city in Brazil. The associations between socioeconomic factors and T2DM occurrence were investigated, and among the sociodemographic variables it was possible to identify an independent effect of marital status on T2DM incidence. Baependi is a small city whose economy is based on family farming, and the role of women is predominantly linked to family care. The general characteristics of this sample reflect a typical rural Brazilian population: men were more frequently single, smokers, and employed. It is known that smoking rates are similar between women and men in high-income countries, but the sex difference increases as a country's income decreases [18], as demonstrated in our sample.
We observed a higher prevalence of overweight/obesity among women at baseline, followed by a marked increase among men over the following five years. This seems closely related to the lifestyle of this population, in which women generally perform more sedentary activities than men, leading to an earlier onset of overweight/obesity in women. In contrast, men showed a marked decline in metabolic health later, possibly as they approached retirement from rural work. Although somewhat predictable, this observation highlights particularities that should be considered in this kind of study; it may also point to sex- and age-specific practices for obesity and hypertension prevention that would be more effective, since our results differ considerably from those of studies of urban populations.
Regarding diabetes incidence, in a cross-sectional study, Iser and collaborators found a 6.3% prevalence of self-reported diabetes (5.9% in men vs 6.6% in women) for the combined population of Brazilian capitals [19]. However, to the best of our knowledge, incidence data for T2DM are still missing for the Brazilian rural population.
The Framingham Heart Study examined T2DM incidence over 8 years within three distinct periods [20]. The age-adjusted 8-year incidence rate of diabetes was higher among men in the 1970s (3.4% vs 2.6%), 1980s (3.6% vs 3.0%) and 1990s (5.8% vs 3.7%) [16]. In the BHS sample, although there was no statistical difference, the incidence rate of diabetes was also higher in men (7.2% vs 6.3%).
Previous studies have assessed the association between T2DM and socioeconomic factors in the Brazilian population. In a large Brazilian sample, in which the prevalence of self-reported T2DM was 7.5%, diabetes remained associated after adjustment with age (≥ 40 years), education (< 8 years of study), marital status (non-married), obesity, sedentary lifestyle and comorbidities such as hypertension and hypercholesterolemia [21]. In a specific Brazilian sample assessed to verify low adherence to anti-diabetic treatment, including only diabetic patients aged over 20 years, age, female sex and lower income were associated with T2DM [22]. Other findings have shown that Brazilian states with greater poverty and lower levels of education also had higher rates of T2DM or hyperglycaemia [22]. However, these studies were all based on prevalent, mostly self-reported cases. Here, we add data on predictors of the T2DM incidence rate.
In addition to well-known risk factors for diabetes, such as diet and physical activity, socioeconomic and sociodemographic factors have shown great importance in this context. Socioeconomic position, measured by educational level, occupation or income, is frequently inversely associated with diabetes [23,24]. Smoking, especially among people with low socioeconomic status, has also been identified as a mediator of diabetes development [25]. Although these factors were investigated in our study, the only socio-demographic factor of greater importance in predicting the 5-year T2DM incidence in the Baependi population was marital status.
The relationship between marriage and improved health outcomes has been previously suggested [26]. Some studies have shown a lower incidence of diabetes [27] and improved adherence to diabetes treatment [28] in partnered patients, since the marital relationship influences health behaviours and socioeconomic status. Although there was no difference in T2DM incidence between men and women in our study, the influence of marital status on T2DM appears in the literature to be modulated by gender. In a recent study investigating diabetes mortality in a large Spanish sample, the highest mortality among women was observed in those divorced/widowed, whereas among men it was observed in those who were single [29]. Considering T2DM incidence, another study found that widowed women had a lower risk of developing T2DM than married women [13]. In our study, the influence of marital status seemed to be independent of sex.
Our results suggest that only those who married or remained married during the 5-year follow-up had significant weight gain, which was associated with an increased risk of developing T2DM. However, the risk associated with marital status did not change, even after this adjustment. In fact, individuals who remained married, despite having significantly increased their weight, were significantly less likely to develop diabetes than their divorced counterparts.
Two primary theories can explain the beneficial effect of marriage on health. The first concerns "selection": healthier individuals tend to get married and to remain married. The second corresponds to a post-marriage effect: reduction of stress and adoption of healthy behaviours [30][31][32][33]. In our study, it is not possible to verify which hypothesis is more plausible; probably both had an effect on T2DM development.
In this context, Cornelis and collaborators conducted an important study following a large number of men for ≥22 years; after various adjustment models, including lifestyle, BMI, family history, and other variables, widowhood remained robustly associated with an increased risk of T2DM [34]. In that study, widowed and divorced/separated individuals were analysed separately, which is important, as widowhood and divorce can have different stressful effects [35]. Alcohol consumption was reported to increase among men who became widowed, while both widowed and divorced/separated men showed decreases in BMI and vegetable consumption [35]. Since these factors influence T2DM, more studies are necessary to clarify possible differences in the relationship between marital status and T2DM risk.

Some limitations were important in our study context. One is the lack of adjustment for physical activity as a potential/residual confounding factor. In addition, the classification of occupation may not have distinguished participants effectively (for example, there was no distinction for domestic or part-time workers), and likewise for marital status, since it was not possible to distinguish widowed from divorced/separated individuals.
---
Conclusions
In summary, lifestyle influences sex-specific metabolic changes over time. Marital status appears to be a predictor of T2DM incidence, and the underlying factors for this association should be further characterised, as they may provide important information for the better design and implementation of preventive programs.
---
The data cannot be shared publicly because they contain potentially identifying and sensitive patient information. The ethics committee does not allow public availability of the data, even in de-identified form. The study protocol was approved by the ethics committee of the Hospital das Clínicas (SDC: 3485/10/074), University of São Paulo, Brazil. Data access inquiries may be sent to [email protected].
---
Author Contributions
Conceptualization: Camila Maciel de Oliveira, Andrea Roseli Vançan Russo Horimoto, Rafael de Oliveira
The principal pursuit of Islamic Education is to produce exceptional and faithful Muslims. Nevertheless, today's mass media frequently report teenagers and school students involved in social issues that harm society, which is worrisome. The social environment significantly impacts the younger generation, especially teenagers and students. Education is the ground of individual development and advancement and the primary platform for shaping a personality of noble morals. Morality and character are paramount in Islam and are the essence of understanding and practising Islamic teachings. Several social environment factors influence the development of students' morals, self-identity and personality, and these factors can nurture and shape students' morals if students adapt well. Hence, this article discusses the social environment that affects the formation of students' morals, along with the issues and challenges faced. This study employed a literature review methodology, analysing and discussing text content thematically to identify the elements that impact the development of morals and the issues and challenges in forming students' self-identity. Ultimately, this article concluded that the social environment, consisting of parents, peers, teachers, schools, residential settings and the mass media, has a substantial impact on developing students' morals. The roles and responsibilities of these interrelated parties must be prioritised in shaping students' morals. This article is relevant for teachers, parents, schools and related parties in applying good values, so that the country can produce a young generation with noble character, and in raising awareness to avoid misconduct among this young generation, forming positive-thinking individuals for their future well-being.

---

Introduction
Education is one of the mechanisms for communicating the Islamic religion and conducting the dakwah process in the community towards goodness. Education based on Quranic values has delivered a civil, moral generation competent in diverse worldly sciences (Anggraheni & Astuti, 2020). In Islam, the definition of education is exhaustive and integrated, encompassing the whole of human life, including aspects such as faith (the central pillar of Islam), sharia (law), worship, morals, science and technology, sociology, and economics. It encompasses life in this world and the hereafter in one complete discipline (Ali & Ab. Razak, 2012; Zakaria et al., 2012; Rosnani, 2015).
In addition to the education system, which shapes the character and self-identity of students, the help and support of the social environment can also help students achieve success in this world and the hereafter and avoid social issues (Muhamad et al., 2012). Meanwhile, Kamarudin (2015) noted that formal and informal education primarily impacts the formation of morals and character. The social environment consistently influences the formation of students' morals and personalities. Therefore, students need appropriate education to establish sound morals and become the country's future human capital.
Nonetheless, social issues among students are alarming. Various parties are worried because, in the long term, these issues will harm future generations. When moral values decline, social problems follow. Through comprehensive education that considers social environment factors, the system can produce students with a strong vision and self-identity. Hence, the social environment is a crucial factor in the process of forming a student's self-identity. Ultimately, this article endeavoured to discuss the social environment in the development of students' morals and personality, as well as the issues and challenges faced.
---
Education and Morals
Education plays a vital role in maintaining the well-being of individual and community life. Education generally means preserving and growing human beings in their physical, mental, linguistic, behavioural, social and religious facets. Attas (1980) highlighted that education is a process of inculcating manners into a person. Jasmi and Tamuri (2007) maintained that education is a process of nurturing and educating, defending, training, purifying, controlling passions, following the leader's instructions, leading, adding, gathering, enhancing, constituting obedience to Allah S.W.T, forming decency and a polite and civilised attitude, attending to rules, replacing and eliminating reprehensible traits with praiseworthy ones, and developing a learning attitude alongside becoming accustomed to the process of teaching and learning something new. Busyairi (1997) explained that, in the context of personality, education determines the personality of a Muslim. According to Ghazali, education is a means to eliminate bad character and instil good character. Zantany (1984) also underlined that tarbiyyah ruhiyyah (spiritual education) is an important aspect of Islamic education alongside tarbiyyah jismiyyah (physical), tarbiyyah 'aqliyyah (intellectual), tarbiyyah wijdaniyyah (emotional), tarbiyyah khuluqiyyah (moral) and tarbiyyah ijtima'iyyah (social). Ulwan (2000) also submitted that personality consists of beliefs, worship, morals and appearance.
In the Islamic religion, education aspires to form a well-rounded human being, one with both intellectual and spiritual intelligence (Arifin, 2014). In addition to being an indicator of the formation of a noble personality, morality also relates to worship. Allah SWT says in the Qur'an, Surah Az-Zaariyat, verse 56: "I did not create jinn and humans except to worship Me."
In terms of objectives, education in Islam seeks to foster good human beings with exemplary morals, to generate balanced human capital, and to form and develop outstanding morals and identity sourced from the Al-Quran and As-Sunnah.
Based on the views of leading scholars, one can conclude that personality reflects character, behaviour or morals arising from the human psychological make-up, and that it can be assessed through four leading elements, belief, worship, morals and appearance, which distinguish one person from another.
---
Methodology
Research methodology is a critical element of a study; high-quality results come from analyses that use a suitable methodology, and the study's results depend on it. Data collection in this research employed a qualitative design based on a literature review. Creswell (2005) maintained that qualitative research can help researchers comprehend processes and forms of practice more deeply.
A literature review is conducted by reading books, journals, and other publications related to the research topic to produce a piece of writing on a specific topic or issue (Marzali, 2016). In this study, the researchers collected primary and secondary sources from written materials such as books, articles, journals and theses. The researchers performed a content analysis of document sources related to social environment factors, the formation of morals, and the issues and challenges associated with the development of students' morals and personalities. A semantic analysis was employed to identify the elements of the social environment, which were then arranged according to the themes discussed. This approach allowed the researchers to discover data based on written documentation. Data collection from primary sources and content analysis of documents can provide information applicable to the issues and problems studied.
---
Issues of Student Social Problems
The moral decline now occurring among students, whether at the school level or in Institutions of Higher Education (IHE), prompts various speculations and questions from the public. Youth, a group largely made up of students, are among the country's valuable assets and the driving agents of its future progress. Thus, the role and contribution of students are expected to ensure the country's steady development. Nonetheless, the question of moral decay is becoming more prevalent over time, and the teenage social issue, often discussed nowadays, is becoming increasingly worrying.
This social issue leads to discipline problems in schools that interfere with the establishment of sound morals and character in students (Azyyati, 2017). Othman et al. (2015) stated that the worsening moral decay and social issues among school students in Malaysia affect the effectiveness of the education system in fostering a young generation with good manners and morals who hold and appreciate good values in everyday life. The issue of moral decay is related to factors such as the self, family, peers, Western influence, the surrounding society, and the mass media (Ibrahim et al., 2012; Zainudin & Norazmah, 2011; Norina et al., 2013). This problem is also a result of present-day modernisation and the cultural shock that has affected the youth, which harms individuals and concerns family institutions, society and the country (Zainudin & Norazmah, 2011).
The moral decay of students poses a substantial threat to the country's progress and growth. Jaafar and Tamuri (2013) asserted that the failure of an institution, organisation, nation, country or civilisation is caused by tarnished individual personality. Various crises, such as social problems and crime among teenagers, arise from disobedience to religious teachings (Azizan & Yusoff, 2017).
In Malaysia, teenagers' social problems increase almost every year. Among the issues are adultery, baby dumping, free mixing, rape and drug abuse. In 2017, 25,992 people in Malaysia were reported to be involved in drug abuse: 18,440 new cases and 7,484 repeat cases were recorded. Of that number, 24,926 were male addicts and 996 were female addicts. Malays made up the largest group of addicts at 20,956, with the rest from Chinese, Indian and other Bumiputera ethnicities. Most of those involved were young people aged 13-39 years, and the leading contributing factor, in 16,209 cases, was peer influence (National Anti-Drugs Agency, 2018).
For teenage pregnancies out of wedlock, the Ministry of Health Malaysia reported 3,938 cases in 2016 and 3,694 cases in 2017, with a decrease to 2,873 cases in 2018 (Ministry of Health Malaysia, 2019). A similar pattern appears in the data recorded in the student personality system (SSDM): 238,790 cases (5 per cent) in 2018, rising to 304,578 cases (6.4 per cent) in 2019. For example, criminal behaviour rose from 9,516 recorded cases in 2018 to 11,648 cases in 2019 (Ministry of Education, 2020).
Globalisation has seriously impacted moral and ethical life in society. The influence exerted by mixed forces through information communication technology also has a great impact on society, especially among teenagers who are students, and Muslim school students are experiencing social issues as well. Accordingly, attention must be given to a favourable social environment in forming a student's self-identity before the situation worsens, accumulates and triggers social issues. The emphasis on noble values in the education system in Malaysia needs to be underscored to produce students who are balanced in various facets.
---
Social Environmental Factors
The influence of the social environment powerfully shapes student behaviour. Several factors impact students' morals and behaviour, among them the school environment, teachers, parents, peers, society and the mass media. A good atmosphere is critical in the formation of an individual's identity, and all of these factors strongly influence the formation of students' self-identity. The following are some factors that affect students' morals and behaviour.
---
School and Teacher Environmental Factors
Sang (2013) elaborated that a school climate that is conducive and equipped with advanced teaching and learning facilities will raise students' cognitive, affective and psychomotor development to a higher level. Rudasil et al. (2017) stated that environmental factors and a conducive school atmosphere greatly influence the formation of a positive school climate. School climate reflects the social interactions, relationships, values, and beliefs held by students, teachers, administrators, and staff. Western researchers have also concluded that the school atmosphere is essential in shaping positive behaviour and dramatically impacts the success of students and schools (Freiberg, 1999; Lauren et al., 2017). School conditions are the essence of a school, and the influence of the environment is a fundamental element in the formation of school students' character, way of thinking, attitude and development (Muhamad, 2015; Ahmad & Abdullah, 2017).
Furthermore, a teacher acts as an advisor, consultant, motivator and specialist expert, and oversees student discipline (Azman et al., 2007). Don (2005), Som and Ali (2016), and Shafiq and Noraini (2018) underlined that teachers and educators are implementing agents who play an important role in executing the curriculum and shaping students' personalities. Teachers need high skill in teaching methods, mastery of the content of the subjects taught, skill in applying the theory of human growth and development, and the ability to be helpful counsellors to students. Teachers and schools are agents of transformation that mould individual potential comprehensively and in an integrated manner.
Abdul Muhsien (2014) established that the practice of teacher-student relationships in the formation of morals was at a moderate level. Most study participants admitted that the teacher-student relationship was already practised, but it happened informally, based on the concerns of a GPI. It would therefore be more meaningful if efforts to empower the formation of morals in schools took a more holistic approach by strengthening the teacher's role and relationship with students. Further, the conclusions of a study by Makhsin (2012) indicated that the school environment retains a strong influence on forming a person with noble morals.
Besides, Jasmi et al. (2007) stated that teachers are a critical element in education and greatly influence its effectiveness. The personality and teaching practices of teachers have a significant influence on the minds and souls of students, who will perceive and follow the teacher's behaviour, reactions and words at this young age. The teacher's attitude and appearance affect students more strongly than others' because students spend a lot of time with the teacher. Hence, teachers should improve themselves before improving their students. Based on these statements and study results, it is clear that teachers and schools are the most critical influence on students' physical, intellectual, emotional and social development.
---
Parental Factors
Initiatives to produce quality people are closely related to how education is acquired. A pleasant family climate therefore plays an essential role in the success of children with eminent personalities. The family institution is imperative in shaping children's character in terms of faith, morals and academics.
Parents play an indispensable role in shaping children's education and forming the foundations of a child's self-development. In religion, parents are responsible for instilling religious values at an early stage, as in the hadith narrated by Imam al-Bukhari and Imam Muslim, translated as: "No baby is born unless it is born in a state of fitrah (pure from any sin), so it is the two parents who will shape the child either a Jew, a Christian or a Magian."
According to Shaari (2009), parents should have suitable educational guidance and knowledge so that the children under their care and custody receive proper guidance and education. Tan et al. (2013) also stated that the process of adolescent moral education through parental education has an impact on teenagers' lives. In addition, parents need to monitor their adolescent children, especially their peers, and the significance of parenting behaviour in the family needs serious attention in developing a teenager's identity and personality (Abd Halim, 2017). Abdullah (2000) underlined that the individuals who most deeply influence children's personalities are their parents.
Parents are the most influential community in building children's morals to behave well and are responsible for establishing their children's personalities and abilities. Arshat et al. (2002) reported that parents must set an example in relationships and be good role models for children, in line with Sulaiman (2011), who noted that parents need to guide children and be their role models.
---
The Peer Factor
According to Abdul Latif (2009), peers refer to a group of children or teenagers of similar age, gender or socioeconomic status who share similar interests. Peers are where teenagers express their troubles and substitute for parents at school. Peer influence is the most potent force in self-development and in changing the values and attitudes of teenagers as the influence of parents and family decreases in early adolescence (Salleh, 2015). Ulwan (2000) explained that the religion of one's friend impacts a person, so it is crucial to know who one's friends are. This conforms with the words of the Prophet Muhammad PBUH, narrated by Bukhari and Muslim: "A person's religion lies in his friends, so be careful who you are friends with" (al-Bukhari: 1332). Ulwan (2004) also stated that whoever befriends lame and corrupt companions will follow and be affected by their damaging behaviour. According to Abd Rahim (2006), teenagers adapt to the family, social and cultural environments that shape the development of their behaviour; therefore, students from families in crisis can be affected in the formation of their morals. Yahya et al. (2010) recommended that parents always note who their children's friends are because friends are a substantial contributor to the increase in children's delinquent behaviour.
In addition, Kasht (1985) mentioned that socialisation, such as friend mannerisms, will bring people closer to Allah SWT if carried out in the manner ordered by sharia. Hall (2008) remarked that methods to monitor children include knowing who they are friends with, observing their activities, seeing where they are and being selective in choosing their friends.
---
Mass Media Factors
Suria (2012) explained that the media plays a pertinent role, from adding knowledge to forming attitudes, perceptions and trust values. The mass media deliver much information to the community, especially regarding current affairs and foreign and domestic news, and the media is also beneficial to the development of education today. At the same time, however, the mass media may contribute towards adverse outcomes. The study conducted by Tamuri and Ismail (2009) on Form 4 students discovered that high media exposure harmed the religious beliefs of teenagers. Wan Norina et al. (2013) stated that one of the reasons for the moral decay of today's youth is exposure to the mass media and the programmes observed in the country. In the context of dakwah, Abd Hamid (2016) noted that media activists should also take on the task of delivering dakwah, utilising media channels to invite people to believe in Allah SWT and introduce the characteristics of Islamic teachings to the community. The media should play a role in applying Islamic and moral values to educate the community so that they too appreciate these values.
---
Community Environmental Factors
Environmental factors have a significant relationship with student achievement. Che Hassan et al. (2017) reported that environmental factors influence students' personalities. The transformed structure of society causes today's students to live and mix with a more diverse community. Environmental support among students is therefore crucial because it produces students who are balanced academically, emotionally and morally. The individuals around students influence how they shape their lives in the future (Yusof, 2010). Yahya et al. (2010) documented that the local community needs to work together in the moral maturation of teenagers and be mindful of all potentially harmful behaviour teenagers commit. One must be frank, warn teenagers and report them to the responsible party if delinquent behaviour transpires, and guide them in carrying out their responsibilities as students.
In addition, close relationships with neighbours are necessary and are emphasised by most societies and religions. Relationships in the neighbourhood can train individuals to help and foster good values in a broader context. Noble value is exemplary behaviour between human relationships that includes religious, social, and neighbourhood aspects to form a united society (Abdul Samad, 2010).
---
The Challenge of Moral Formation
Shaping the morals and character of students nowadays is taxing because of factors that affect it from various angles. Among those elements are parenting and upbringing, domestic disorder, the management of the school environment, the influence of readily available information technology, and the impact of a student's living circumstances.
Several social investigations have found that the causes of social issues among teenagers are closely related to a broken family system, the failure of the family to educate children in beliefs and morals, and the neglect by parents of their role in providing religious education at home (Ismail, 2005; Che Hasniza & Fatimah, 2011). Hence, the key challenge for parents is to provide proper religious education. Sulaiman (2011) stated that one needs to teach kindness to one's children and family and to educate them: parents must train their children and be role models for them. Samsuddin and Sawari (2005) asserted that parents have a great responsibility in disciplining and educating children, playing a critical role in shaping children's beliefs so that they become valuable people in this world and the hereafter.
Parents should be exposed to parenting education, which lets them master the right ways and methods to enlighten their children. Education from birth, continuing until the child matures, will produce skilled individuals with good personalities. Children should realise that they must maintain a robust relationship with their parents, communicating and constantly interacting positively to improve their personalities (Othman & Khairollah, 2013). In the Islamic view, parents play an essential role and are responsible for educating their children, especially in religious, moral and physical education.
Further, Lee and Kim (2017) found that more parental control is required to educate children well; parents who do not pay attention to their children contribute to moral decay. Parents must also monitor their children's movements in every association with their friends. Peer influence is decisive in adolescent development because teenagers spend most of their time with friends rather than with their parents, causing them to be easily influenced by activities and behaviour driven by their peers.
In addition, the principal challenge for teachers and school administrators is to be sensitive to each student's distinct appreciation of Islamic morals. Complete and exhaustive moral education is therefore vital to address the diversity among students, and this needs attention from all parties (Sarimah et al., 2011). The study by Jusoh and Sharif (2018) concluded that implementing educational programmes with spiritual development requires a solid organisation to achieve the objective of spiritual growth in forming students' character. Surat and Rahman (2022) claimed that the support of the social environment, with the active involvement of students in the co-curriculum, has an impact on students' soft skills, the development of holistic human capital and adaptation in learning. The school also needs to organise spiritual programmes, such as religious activities, to enhance students' personalities (Safura et al., 2019).
Likewise, the challenge of dealing with the influence of the mass media on teenagers is undeniable. Teenagers tend to do things they have heard or seen in the mass media because of a desire to experiment. Mass media is a necessity for humans today, especially students, and has driven the development of the internet, which has become a new socialisation agent. Failure to control it, however, causes addiction, primarily to the internet, and exposes students to harms such as cyberbullying, sexting, pornography, and physical and mental health problems (Kalaisilven & Sukimi, 2019). Jamilin et al. (2011) commented that media exposure can easily influence the culture and thinking of all levels of society, especially today's teenagers. The mass media can control students' morals, leading to addiction, such as addiction to online video games, and even worse, to depression, self-harm and death.
---
Conclusion
Human value is determined by moral position: the higher the morals, the higher the human worth and dignity. The implementation and formation of morals through education, whether formal or informal, is hence a shared responsibility, and the current education system requires support from all parties in shaping the morals and character of students. It is fitting for every responsible party in society to play an essential role in developing student morals. The influence of the social environment plays a critical role in the formation of students' morals. In line with the development of the modern world today, many challenges have implications for their lives, including sociological and psychological aspects, and this situation has a comprehensive impact on the lives of students. The cooperation of all parties, especially parents, is key in dealing with the challenges students face. Environmental support includes humanitarian elements, such as teacher support, parental support, peer support and community support, and non-humanitarian factors, such as the various types of media influencing student behaviour. A person will become good if he appreciates morals in his life, and a favourable climate shapes the student's personality. Therefore, praiseworthy and noble morals can have an excellent and positive effect on students facing issues and challenges.
This study has contributed a valuable finding regarding students' moral development. The application of moral values is fundamental to the formation of a noble personality. The existing education system needs support from all parties involved in adolescent moral education, and it is natural for everyone in the community to play an essential role in the development of adolescent morals. Environmental support encompasses humanitarian elements, such as support from teachers, parents, peers and the community, as well as non-human components, such as the influence of the various types of media that one receives through life processes, which impact students' morals and personality. This also indicates that developing students' potential is not only a matter of schooling; it needs to be done holistically by all parties involved through various social learning approaches. Students need the support and motivation of parents, teachers, classmates and the community to help promote their soft skills. A person will change for the better on recognising noble morals because noble morals produce a positive impression on daily life. A quality education is a dynamic concept that influences the positive behaviour of a student as a whole.
Background: Alcohol-related harm has been found to be higher in disadvantaged groups, despite similar alcohol consumption to advantaged groups. This is known as the alcohol harm paradox. Beverage type is reportedly socioeconomically patterned but has not been included in longitudinal studies investigating record-linked alcohol consumption and harm. We aimed to investigate whether and to what extent consumption by beverage type, BMI, smoking and other factors explain inequalities in alcohol-related harm. Methods: 11,038 respondents to the Welsh Health Survey answered questions on their health and lifestyle. Responses were record-linked to wholly attributable alcohol-related hospital admissions (ARHA) eight years before the survey month and until the end of 2016 within the Secure Anonymised Information Linkage (SAIL) Databank. We used survival analysis, specifically multi-level and multi-failure Cox mixed effects models, to calculate the hazard ratios of ARHA. In adjusted models we included the number of units consumed by beverage type and other factors, censoring for death or moving out of Wales. Results: People living in more deprived areas had a higher risk of admission (HR 1.75; 95% CI 1.23-2.48) compared to less deprived. Adjustment for the number of units by type of alcohol consumed only reduced the risk of ARHA for more deprived areas by 4% (HR 1.72; 95% CI 1.21-2.44), whilst adding smoking and BMI reduced these inequalities by 35.7% (HR 1.48; 95% CI 1.01-2.17). These social patterns were similar for individual-level social class, employment, housing tenure and highest qualification. Inequalities were further reduced by including either health status (16.6%) or mental health condition (5%). Unit increases of spirits drunk were positively associated with increasing risk of ARHA (HR 1.06; 95% CI 1.01-1.12), higher than for other drink types. 
Conclusions: Although consumption by beverage type was socioeconomically patterned, it did not help explain inequalities in alcohol-related harm. Smoking and BMI explained around a third of inequalities, but lower socioeconomic groups had a persistently higher risk of (multiple) ARHA. Comorbidities also explained a further proportion of inequalities and need further investigation, including the contribution of specific conditions. The increased harms from consumption of stronger alcoholic beverages may inform public health policy.
---
Background
Alcohol consumption is a leading risk factor for population health worldwide [1]. Measures of alcohol-related harm such as hospital admissions and mortality show particularly wide inequalities and reducing inequalities is a focus of governments [1][2][3][4]. Alcohol-related harm has been found to be higher in disadvantaged groups, despite comparable or even lower reported alcohol consumption than in advantaged groups [5,6]. This phenomenon has been termed the 'alcohol harm paradox'. A number of hypotheses to explain it have been suggested in the literature [5,[7][8][9].
The first hypothesis is that there may be different patterns of alcohol consumption across groups, rather than simply unit consumption or whether a threshold of consumption is reached. Average consumption may not differ between groups, but if all alcohol is consumed in one sitting, peak toxicity is greater in those who binge drink; more deprived groups are more likely to drink at extreme levels, potentially explaining part of the paradox [8]. The type of alcoholic beverage may also offer an explanation. Consumption of spirits or beer has been associated with worse "trouble per litre" than wine, and consumption of spirits has been associated with increased alcohol poisoning and aggressive behaviour [10,11]. It has also been suggested that the poorest outcomes are found for beverages chosen by young men [10]. A potential mechanism could be the faster absorption of alcohol from stronger drinks, or other characteristics of people with a particular beverage preference, but the reasons for differing outcomes by beverage type are not well understood.
The second hypothesis concerns the combination of challenging health behaviours or comorbidities typically found in more disadvantaged groups. This combination causes proportionately poorer outcomes compared to similar alcohol consumption in advantaged groups. Deprived higher risk drinkers were found to be more likely to drink alcohol combined with other "health-challenging behaviours that include smoking, being overweight, poor diet and lack of exercise" compared to more affluent groups [7]. There are also known associations between mental health and alcohol consumption which could affect disadvantaged groups differently [12].
The third hypothesis relates to underestimating consumption in disadvantaged groups and the alcohol harm paradox not existing or being an artificial construct. Response bias may be at work where those who do not respond to the survey could have systematically different consumption levels or worse outcomes compared to responders [13]. Moreover, current drinking may not reflect the life history of harmful drinking, which has been found to be associated with deprivation in lower and increased risk drinkers [7].
A few recent cross-sectional studies have investigated the harm paradox, but mostly considered drinking patterns and their influence on the paradox rather than outcomes of harm [7,8]. Only one longitudinal study in Scotland has employed record-linkage between consumption patterns and harm, investigating socioeconomic status as an effect modifier, but did not include the type of beverage or multiple admissions [5].
This study aims to investigate whether and to what extent individual alcohol consumption by type of beverage, smoking, BMI and other factors could account for inequalities in alcohol-related hospital admission (ARHA). A different risk of harm by socioeconomic group for a given level of individual consumption could be an explanation of the alcohol-harm paradox at group level. Additionally, we examine how the patterns of consumption by type of beverage differ by socioeconomic group.
---
Methods
---
Data
This analysis was carried out using the Electronic Longitudinal Alcohol Study in Communities (ELAStiC) data platform and details on the data and linkage methods are outlined in the study protocol [14]. A summary and further specific details for this study are described below.
---
Welsh health survey
Our cohort consisted of 11,038 people aged 16 and over who responded to the Welsh Health Survey in 2013 and 2014, consenting to have their survey responses linked to routine health data. The Welsh Health Survey is an annual population survey on health and health-related lifestyle based on a representative sample of people living in private households in Wales (random sampling). It consists of a short interview with the head of household and a self-completed questionnaire for each individual adult aged 16 years and above in the household. A question on consent for data linkage was included from April 2013 to December 2014 and approximately half of the respondents agreed. Originally 11,694 respondents agreed to their data being linked, and records were successfully linked and anonymised into the SAIL Databank through standard split file processes for 11,320 individuals (3.2% loss) [14]. Linkage to records of household residence needed for analysis failed for 282 respondents, resulting in the final sample of 11,038 people (5.6% loss overall). An overview of characteristics of the study population is shown in Table 1.
---
Measures of socioeconomic status
We used an area-based deprivation measure (i), the Welsh Index of Multiple Deprivation (WIMD) 2011 [15], as well as four individual-level measures of socioeconomic status from survey responses: (ii) social class, (iii) employment, (iv) housing tenure, and (v) highest qualification. We linked the WIMD to each Lower layer Super Output Area (LSOA) of residence at survey month. We grouped the two more deprived quintiles and the three less deprived quintiles because of relatively small numbers.
---
Alcohol consumption
Respondents were also asked about the frequency of drinking, including whether or not they had drunk alcohol at all during the past year and the number of each type of alcoholic beverage they had consumed on the heaviest drinking day in the past week. These include categories of, for example, "small can of strong beer", "small glass of wine", as well as free text for additional drinks not listed. These data were converted into units (8 g ethanol per unit) consumed by beverage type, and capped at 60 units to deal with a very small number of responses of between 60 and 120 units, likely a misreading of units. We created three groups: 1) beer and cider; 2) wine and champagne; 3) spirits, alcopops, fortified wine and others. There were relatively small numbers of alcopops, fortified wine and others and so we combined these with the spirits. Our sensitivity analysis showed that the inclusion of these drinks did not alter the results for this category which was predominantly made up of spirits.
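As an illustration of this unit-conversion step, the sketch below converts hypothetical drink counts into units per beverage group. The drink names, the unit values per drink, and the proportional handling of the 60-unit cap are assumptions for illustration, not the survey's actual coding frame:

```python
# Map each reported drink to UK units (1 unit = 8 g ethanol).
# Unit values here are illustrative assumptions, not the survey's codebook.
UNITS_PER_DRINK = {
    "small can of strong beer": 2.0,
    "pint of beer": 2.3,
    "small glass of wine": 1.5,
    "single spirit measure": 1.0,
}

# Group drinks into the three beverage categories used in the analysis.
BEVERAGE_GROUP = {
    "small can of strong beer": "beer_cider",
    "pint of beer": "beer_cider",
    "small glass of wine": "wine_champagne",
    "single spirit measure": "spirits_other",
}

def units_by_beverage_type(drinks, cap=60.0):
    """Convert {drink: count} responses into capped units per beverage group."""
    totals = {"beer_cider": 0.0, "wine_champagne": 0.0, "spirits_other": 0.0}
    for drink, count in drinks.items():
        totals[BEVERAGE_GROUP[drink]] += count * UNITS_PER_DRINK[drink]
    # Cap implausible totals (responses of 60-120 units were treated as misread).
    total = sum(totals.values())
    if total > cap:
        scale = cap / total
        totals = {k: v * scale for k, v in totals.items()}
    return totals
```

Capping the total while scaling each group proportionally is only one plausible choice; the paper states the 60-unit cap but not how it was distributed across beverage types.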
---
Outcome measure of alcohol-related hospital admission
The outcome was (multiple) alcohol-related hospital admission(s). We selected the earliest episode in each hospital spell with a wholly attributable diagnosis included in the definition outlined in the study protocol [14]. These are similar to the alcohol-specific definition used by Public Health England with a few additional codes [14,16]. These could be the primary diagnosis or a secondary diagnosis in any position. This included multiple admissions for survey respondents. The details of the data source, linkage and extraction are outlined in the study protocol [14].
---
Other survey measures
Other measures based on survey responses were smoking, BMI, general health and current treatment for a mental health condition. Smoking was coded into three categories: 1) regular or current smoker, 2) ex-smoker and 3) never smoker. BMI was calculated from self-reported height and weight. Respondents were asked about their general health, which we coded into two groups: 1) poor and fair health; 2) good, very good and excellent health. Respondents were also asked whether they were currently being treated for depression, anxiety or another mental illness (yes/no); this was coded into a binary variable indicating treatment for any listed mental health condition, or no treatment if none was selected.
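A minimal sketch of this coding step follows; the category labels and response values are illustrative assumptions, while the BMI formula is the standard weight divided by height squared:

```python
def bmi(height_m, weight_kg):
    """BMI from self-reported height (metres) and weight (kilograms)."""
    return weight_kg / height_m ** 2

def code_smoking(response):
    """Three smoking categories as described in the text."""
    return {"current": "regular or current smoker",
            "ex": "ex-smoker",
            "never": "never smoker"}[response]

def code_general_health(response):
    """Collapse five self-rated health categories into two groups."""
    return "poor/fair" if response in {"poor", "fair"} else "good/very good/excellent"

def code_mental_health(treated_for):
    """Binary: currently treated for any listed mental health condition."""
    return len(treated_for) > 0
```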
---
Study design/processing
Survey responses were record-linked within the SAIL Databank to hospital admission data (Patient Episode Database for Wales), mortality data (Annual District Death Extract from the Office for National Statistics) and data containing residence, and thus house moves (Welsh Demographic Service Dataset), as outlined in the study protocol [14]. All data were extracted from eight years before the survey month until the end of 2016. The study period ran from three years before the survey in 2013 or 2014 to the end of 2016, giving between five and six years of follow-up depending on when the survey was undertaken. We structured the data so that each person could contribute multiple time periods if they had an admission, counting the number of admissions up to the current time period during the study. We also considered the number of historic alcohol-related admissions during the five years before study start (i.e. 8 years before to 3 years before the survey date, or 2005-06 to 2010-11) as a covariate in the modelling analysis. We censored for death or moving out of the study area (Wales). An illustration of the study timeline is shown in Fig. 1. We also performed a sensitivity analysis restricted to time periods after the survey date only (2013/14 to the end of 2016) for comparison.
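The person-period structure described above, with each person contributing one interval per admission and a final censored interval at death, move, or end of study, can be sketched as follows. The field names and interval-splitting logic are illustrative assumptions rather than the study's actual data pipeline:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    person_id: int
    start_age: float   # age at interval start (age is the timescale)
    stop_age: float    # age at admission or censoring
    event: bool        # True if the interval ends in an ARHA
    event_count: int   # admissions so far during the study (used as stratum)

def build_intervals(person_id, entry_age, exit_age, admission_ages):
    """Split follow-up into intervals ending at each admission, then censoring.

    exit_age is the earliest of death, moving out of Wales, or end of study.
    """
    intervals, start, count = [], entry_age, 0
    for a in sorted(admission_ages):
        if entry_age < a <= exit_age:
            intervals.append(Interval(person_id, start, a, True, count))
            start, count = a, count + 1
    if start < exit_age:
        # Final censored interval with no event.
        intervals.append(Interval(person_id, start, exit_age, False, count))
    return intervals
```

Stratifying a Cox model by `event_count` then gives each admission count its own baseline hazard, in the spirit of the recurrent-event approach the paper describes.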
---
Statistical analyses
We estimated hazard ratios (HR) with 95% confidence intervals (95% CIs) for the risk of (multiple) alcohol-related hospital admission associated with each socioeconomic group using multi-level Cox mixed effects models [17]. We used a recurrent event model with admission as the outcome and age, rather than calendar time, as the underlying timescale. We used Cox proportional hazards models stratified by the current count of admission events to date (during the study period), so that each unique admission count has a separate baseline hazard function. Including admission counts during the study period as strata accounts for covariance within an individual's recurrent events and is similar to a frailty model [18]. Details of covariates in each model are given below, but in every case their hazard ratios were assumed constant across strata. Additionally, a random effect at the household level was used in the multi-level analysis to allow for potential similarities in responses within a household over and above individual characteristics. All analyses were conducted in R [20], specifically using the coxme function [21]. To deal with missing observations for BMI, unit consumption, smoking and the individual-level socioeconomic measures we used 20 iterations of multiple imputation by chained equations using the MICE package in R [19]. This was chosen for efficiency, to avoid reducing the sample size.
The number of historic events during the 5 years before study start was included as a covariate in all models. This was chosen to account for differences in risk of the next admission, because people with a prior admission were more likely to have another admission than those who did not.
The first basic model (Model A) adjusted for area deprivation, sex and the number of historic ARHA during the 5 years before study start. Model B additionally adjusted for the number of units reported by drink type (beer and cider; wine and champagne; spirits including alcopops) on the heaviest drinking day in the past week, smoking status and BMI. We repeated the basic and adjusted models, replacing (i) area deprivation with each of the individual-level measures of socioeconomic status: (ii) social class, (iii) employment, (iv) housing tenure, and (v) highest qualification, to compare estimates in the basic model with those of the adjusted model. We also included an interaction term in adjusted Model B between BMI and total unit consumption.
Model C, also based on the adjusted model B, additionally included self-reported general health, and Model D added self-reported treatment for a mental health condition to investigate comorbidities.
Two additional models were used to investigate the contribution of the units for each specific beverage type to inequalities. These were based on Model A, but also included the total units consumed and, separately, the units for each type of drink as covariates (results not shown). Another model included the frequency of drinking (results not shown).
For the sensitivity analysis we re-ran all of the above models on the limited dataset including only the time periods following the survey date. The results were compared to the main results using the extended dataset.
Finally, we also analysed the mean units of alcohol consumed by beverage type and by age, sex and deprivation group, including 95% confidence intervals (Fig. 2). To show the distribution of units in each group we also included boxplots for any type of beverage (Fig. 3).
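The group means with 95% confidence intervals shown in Fig. 2 are of the standard normal-approximation form, mean ± 1.96·SD/√n. A small self-contained sketch (illustrative Python, not the authors' R code, and assuming the normal approximation was used):

```python
import math

def mean_ci95(values):
    """Mean with a normal-approximation 95% CI: mean ± 1.96 * SD / sqrt(n)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))  # sample SD
    half = 1.96 * sd / math.sqrt(n)
    return mean, mean - half, mean + half
```

Applied per age, sex and deprivation group, this yields the point estimates and interval bounds plotted in the figure.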
---
Results
---
Sample characteristics
Our study sample consisted of 11,038 respondents with a total of 63,638.9 person-years of follow-up. There were 279 alcohol-related admissions during the study period (131 individuals with one or more admissions). The crude rate per 1000 person-years was 4.38. An overview of our sample characteristics is shown in Table 1. There were more females than males. Key demographic data were complete in the survey, but there were missing responses to some of the individual survey questions, ranging from 0.6% for drinking frequency to 4.9% for BMI. The modelling analyses use imputation to deal with missing responses, but Table 1 shows completed and valid responses only, so the totals differ between characteristics, for example between alcohol consumption and smoking status.
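The crude rate quoted above follows directly from events divided by person-time:

```python
events = 279            # alcohol-related admissions during the study period
person_years = 63638.9  # total follow-up

rate_per_1000 = 1000 * events / person_years
print(round(rate_per_1000, 2))  # 4.38, matching the reported crude rate
```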
---
Patterns of consumption
Deprived groups had larger proportions of people who reported not drinking at all in the past year (15% compared to 11%, Table 2), and also higher proportions who did not drink in the past week but reported some drinking in the past year (47% compared to 37%, Table 2). However, among those who drank, the deprived group had a slightly higher proportion who binged (more than 4 units for men and more than 3 units for women) on a single occasion, with 25.8% in the deprived group compared to 23.6% in the less deprived group. This suggests that fewer people drank in deprived groups, but that those who drank any alcohol drank more. Some of those who either did not drink at all in the past year, or reported some drinking in the past year but no units in the past week, had an alcohol-related admission at some point during the study period. This could suggest that ongoing health concerns might explain their abstinence [22].
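The binge definition used here reduces to a simple sex-specific threshold rule; a sketch (the `sex` coding is a hypothetical convention for illustration):

```python
def binged(units_heaviest_day, sex):
    """Binge flag as defined above: more than 4 units for men and more than
    3 units for women on the heaviest drinking day in the past week."""
    threshold = 4 if sex == "male" else 3
    return units_heaviest_day > threshold
```

Note the thresholds are strict: exactly 4 units for a man (or 3 for a woman) does not count as a binge.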
Overall, the mean units of total alcohol consumed were similar or slightly higher in the more deprived group than the less deprived group for males but similar or slightly lower for females (Fig. 2). If only those who drank are compared (chart not shown) then men in the more deprived group drank more on average than men in the less deprived group for all age groups with smaller differences in women.
Socioeconomic patterns differed by type of beverage. Similar to any type, mean units of beer were slightly higher in more deprived groups, and unit consumption much higher for men than women. The pattern for wine was the opposite showing lower consumption in more deprived, with the exception of the youngest men. More spirits were consumed by younger drinkers with only slightly lower averages for the deprived group. There was little difference in the more deprived group in most other age groups of those aged 30 and above compared to less deprived groups. The box plots in Fig. 3 for units of any type of beverage show that the distribution is skewed towards lower reported units reflecting the large proportion of people reporting zero units, particularly in the youngest and oldest age groups. The medians for younger males in more deprived groups are lower than the less deprived, and for females the medians are lower in the more deprived for most age groups.
---
Factors associated with alcohol-related hospital admission
A total of 131 out of 11,038 respondents had at least one ARHA during the study period. Women tended to have a lower risk of admission than men (HR 0.71; 95% CI 0.51-0.99, Model A in Table 3), although this was only statistically significant in Model A, and not in the fully adjusted Model B. Smoking had the strongest association with alcohol-related hospital admission and smokers were 4.53 times more likely to have an admission (HR 4.53; 95% CI 2.85-7.21, Model B) than those who were never smokers. Ex-smokers were 1.50 times more likely to have an admission compared to the same reference group, although this was not statistically significant. BMI appeared to be slightly protective, but it was not statistically significant (HR 0.98; 95% CI 0.94-1.01, Model B). We also investigated the interaction between BMI and total unit consumption based on Model B but we found no evidence for an interaction (results not shown).
Unit increases of spirits drunk were positively associated with increasing risk of ARHA (HR 1.06; 95% CI 1.01-1.12, Model B), higher than for other drink types. Unit increases for beer and wine were, however, not statistically significant. The reported frequency of consumption suggested an elevated risk of ARHA for those who did not drink in the past year and those who drank weekly relative to those who drank less than weekly, although not statistically significant (results not shown). An increased risk for those who did not drink at all might suggest that these are ex-drinkers who have stopped drinking, perhaps due to poor health. Due to the relatively small sample size we could not analyse ex-drinkers separately.
People with poor health had an elevated risk of ARHA (HR 2.89; 95% CI 1.91-4.37, Model C) compared to those who considered themselves in good health. Similarly, people who were currently being treated for mental illness had a much higher risk of ARHA than those who did not (HR 2.66; 95% CI 1.72-4.11, Model D). Although this will need further research relating to interactions and specific conditions, it does suggest that comorbidities, either relating to alcohol or otherwise, could be important.
The number of historic admissions before study start was significantly associated with a higher risk of ARHA. We treated this not as a "risk factor" itself, but as merely indicative of the likely presence of other (unmeasured) risk factors.
---
Inequalities in the risk of alcohol-related hospital admission
People living in more deprived areas had a higher risk of ARHA (HR 1.75; 95% CI 1.23-2.48) compared to less deprived (Table 3). In an interim model adjusting for units of alcohol drunk only (results not shown), there was little change (4%) in the risk of ARHA for more deprived areas (HR 1.72; 95% CI 1.21-2.44). Adjustment for smoking status and BMI in model B reduced the risk of ARHA by 35.7% (HR 1.48; 95% CI 1.01-2.17). We found a similar pattern for all socioeconomic measures, area-based or individual-level, of a reduced but still persistently higher risk in disadvantaged groups after adjustment (Table 4). For example, using social class, people in the "Routine and manual" class had a higher risk of ARHA (HR 2.03; 95% CI 1.30-3.15) compared to the "Professional and managerial" class. After adjustment in the full model the risk had slightly reduced but is still substantially higher (HR 1.81; 95% CI 1.09-3.00) than the comparison group.
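The percentage reductions quoted here appear to express how much of the excess hazard (HR minus 1) in the basic model is removed by adjustment. The paper does not state the formula explicitly, so this is our reading, but it reproduces both figures from the rounded HRs: (1.75 − 1.72)/(1.75 − 1) gives 4%, and (1.75 − 1.48)/(1.75 − 1) gives 36%, consistent with the reported 35.7% from unrounded estimates. A sketch:

```python
def excess_hazard_reduction(hr_basic, hr_adjusted):
    """Percent of the basic model's excess hazard (HR - 1) removed by adjustment.

    Assumed form, inferred from the percentages reported in the text.
    """
    return 100 * (hr_basic - hr_adjusted) / (hr_basic - 1)

print(round(excess_hazard_reduction(1.75, 1.72)))  # 4  (units-only adjustment)
print(round(excess_hazard_reduction(1.75, 1.48)))  # 36 (Model B; ~35.7% unrounded)
```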
Adjusting for the total number of units regardless of type of beverage (results not shown) gave very similar results to Model B, with an elevated risk of ARHA in the most deprived group (HR 1.46; 95% CI 1.01-2.11). This suggests that the type of beverage was not important over and above the number of units relating to inequalities.
For models C and D the risk of ARHA in the more deprived group was reduced further compared to Model B (Poor health by 16.6%: HR 1.36; 95% CI 0.92-2.00; being treated for mental health condition by 5.0%: HR 1.45; 95% CI 0.96-2.17, Table 5). This risk in disadvantaged groups, although still elevated, was not statistically significant. Although this will need further research relating to interactions and specific conditions, it suggests that comorbidities, either relating to alcohol or otherwise, could be important.
---
Sensitivity analysis using limited dataset following the survey date only
Using the data limited to the time periods following the survey date there were 131 admissions, 60 in the less deprived and 71 in the more deprived group. There were 33,067 person-years of follow-up. Overall, the model results and conclusions drawn are similar, but due to the smaller number of events most results were not statistically significant (Table 6 in Appendix 1). Inequalities based on area deprivation were slightly narrower, and inequalities based on individual-level socioeconomic measures slightly wider before adjustment compared to the main analysis shown in the paper. Adjustment for alcohol consumption by type, smoking and BMI reduced inequalities, and as before a higher risk of ARHA in disadvantaged groups remained. Adjustment resulted in a similar reduction of the hazard ratio in the repeated Model A and Model B for area deprivation, but due to smaller inequalities yielded a slightly higher percentage reduction than the extended dataset. Adjustment for poor health or mental health also reduced inequalities further. The risk of ARHA by type of drink was also similar, with the highest risk for spirits. The sensitivity analysis showed that the results are comparable to those shown in the paper using the extended dataset. We decided to sacrifice a small amount of bias relating to the timing of the survey in favour of reducing variance and used the extended analysis as the main analysis in this paper.
---
Discussion
The main aim was to investigate whether and to what extent adjustment for individual alcohol consumption by type of beverage and other factors could explain inequalities in alcohol-related hospital admissions and therefore help explain the alcohol harm paradox. We found that consumption by beverage type did not help to explain inequalities in alcohol-related harm, despite consumption by type being socioeconomically patterned. Adjustment for individual-level units by type of alcohol drunk only very slightly reduced inequalities in ARHA, similar to all units combined. Smoking and BMI accounted for part of the differences, reducing inequalities by 35.7%, but deprived groups still had a persistently higher risk of ARHA, having considered multiple admissions. This pattern was similar for area-based deprivation and individual-level socioeconomic measures. Our findings on inequalities are broadly similar to a previous study [5] which found that disadvantaged groups had consistently higher alcohol-attributable outcomes, having considered similar total alcohol consumption, BMI and smoking. That study analysed quintiles of deprivation and more subgroups for the individual socioeconomic measures, and used a slightly different definition, so a precise direct comparison of the extent of inequalities and the effect of the adjustment is difficult. Their study design also differs in analysing the time to the first admission whilst excluding those with a prior admission. Our analysis includes multiple hospital admissions during the study period as well as information on historic admissions, and we found historic admission to be an important factor for the risk of another admission. We thus incorporated people with multiple admissions during the study period, who use more health service resources; excluding them, or censoring after one admission, could miss relevant patterns.
For example, descriptive statistics issued by government or health services can include the same people in successive time periods in cross-sectional analyses.
Including the type of beverage in our analysis was novel. Unit consumption per type of drink is not usually available in survey data, whether record-linked or not. Whilst beverage type was not important in relation to inequalities in ARHA, there were differences in the risk of ARHA by type of drink. Spirits showed the largest increase in the risk of ARHA per additional unit consumed. A Finnish study found that consumption of spirits increased in direct proportion to overall consumption as part of binge drinking sessions, although it did not investigate subsequent alcohol-related harm [11]. They suggested that whilst beer was consumed in large quantities at a variety of drinking occasions, spirits were "needed to get really drunk" [11]. Others have argued that the most harmful drink is "whatever young men are drinking" [10]. In our study, average spirit consumption was highest in the younger age group, although higher in young women than in young men. The mechanism for increased ARHA for spirits needs further attention and could be due to the faster absorption of alcohol from stronger drinks in one binge drinking session or "pre-loading" before going out in younger people. If policy sought to tackle stronger drinks in particular, these may, however, be replaced by other types rather than reducing harmful consumption.
The alcohol harm paradox is based on deprived groups drinking similarly or even less than advantaged groups on average. In our study, average binge drinking was slightly higher in deprived groups than less deprived. The mean units for any type of alcohol, however, were similar or lower in deprived groups for most age groups. There were differences in proportions of non-drinkers between deprivation groups that influence the averages. This might suggest that the alcohol harm paradox could in part be an artificial construct, particularly when relying on binge drinking measures beyond a threshold instead of individual units, related to the third hypothesis. In our modelling analysis we focussed on inequalities given similar consumption, thereby adjusting for slightly higher average consumption in more deprived groups in our sample, and investigating an important part of the alcohol harm paradox. The type of beverage showed different socioeconomic patterns, in line with international findings on "trouble per litre" [10] and a study in England [7]. The deprived group drank more beer (or cider), but less wine compared to less deprived. The average units of spirits were similar in the deprived and less deprived group in those over the age of 30, but slightly lower in deprived younger people. This may support the finding elsewhere that the paradox may be more concentrated in men and younger age groups, as the association between consumption and socioeconomic status increased with age [9]. Whilst there may not be any inherent difference between units by type and resulting harm, choices may be indicative of different drinking occasions such as binge drinking or other individual factors.
In our models we also investigated self-reported health status and, separately, being treated for a mental health condition. Either adjustment reduced inequalities in ARHA further, suggesting that comorbidities may explain some of the alcohol harm paradox. Socioeconomic deprivation has been shown to be associated with multimorbidity, particularly mental health conditions [23]. These may also include conditions related to smoking, which we have accounted for in our models, and may explain the relatively small effect of comorbidity reducing inequalities in our models. We were restricted by sample size and study design to analyse this in more detail, but further research should investigate comorbidities further, including specific conditions.
As with all longitudinal studies, following people over time yields detailed information about the dynamics of response to exposures. Another key strength of our study is the use of record-linkage of individual-level alcohol consumption and other factors to alcohol-related harm, as well as multiple measures of socioeconomic disadvantage. To our knowledge this is the first longitudinal linkage study on the alcohol harm paradox investigating the type of beverage and considering multiple admissions. It takes full advantage of the richness of the data through multi-level multi-failure modelling, imputation for missing data, and censoring for migration and death. There are, however, some limitations relating to the data.
The main limitation relates to the relatively small study sample of just over 11,000 respondents and the fact that only around half of those asked agreed to data linkage. This meant that the number of events was also relatively small, with 279 admissions in 131 individuals, but this uncertainty was appropriately reflected in the models. Failure of linkage of survey respondents to residence data was small (3.2%). Further details on linkage of this dataset are included in the ELAStiC study protocol [14]. We have compared the demographic characteristics of our sample to the total sample for both years outside of the record-linked environment and found that the distribution by age and sex is fairly similar. The reported binge drinking patterns by age and sex were also found to be similar, although proportions were slightly lower in our sample.
Whilst we have been able to compare alcohol consumption in our sample and the total sample, it is possible that the study sample is different in terms of their ARHA and potentially not population representative. Even with higher consent for linkage a Scottish study found that underestimation of consumption in surveys was likely to be socioeconomically patterned, as was linked alcohol-related harm [13]. The available sample size also meant that we needed to group the more deprived 40% and the less deprived 60% rather than analysing deprivation quintiles. This allowed detection of significant effects, but meant that we are underestimating the extent of inequalities between the more extreme ends of the deprivation gradient. However, we were able to repeat the analyses using individual-level socioeconomic measures allowing some validation of the patterns found, and our results were similar to the only other comparable longitudinal study. Using only conditions wholly attributable to alcohol in our analysis is also underestimating the wider alcohol-related harms where alcohol is only in part responsible.
One of the explanations of the alcohol harm paradox relates to the accuracy of the measure of consumption. We had to assume that reported consumption and other factors were constant throughout the study period, estimated from the survey response in the middle of the study period rather than at baseline. We acknowledge the possibility that respondents may have changed their drinking, or the reporting of their drinking, following a hospital admission, and thus the possibility of reverse causation. To circumvent this possible source of bias we performed a sensitivity analysis using data limited to time periods following the survey date only, which showed substantively similar results. We therefore decided to sacrifice a small amount of bias relating to the timing of the survey in favour of reducing variance. In our study we found a small number of respondents who reported not drinking at all in the past year but having an ARHA during the study period. They could be "sick quitters" who may drink less due to excessive alcohol use in the past or ill health, and are likely to have different outcomes to other non-drinkers. Our main measure is self-reported unit consumption, including by type of drink, for the heaviest drinking day in the past week. It may be more indicative of binge drinking in one session than of overall units consumed, for example relative to weekly consumption guidelines. Whether asked at baseline or not, responders may not recall their actual consumption or may give favourable estimates, and their drinking in the past week, as commonly asked in many surveys, may not be representative of their usual or overall consumption. There were some respondents who did not drink in the past week, or drank below binge levels, but also had an ARHA.
Reducing inequalities in health is a major goal of governments, and included in the United Nations sustainable development goals [24], and the Wellbeing of Future Generations Act in Wales [2]. Alcohol policy aiming to reduce consumption in populations as a whole, including taxation and reducing availability internationally, tends to have a greater effect on poorer drinkers than on richer drinkers, and may help reduce inequalities in alcohol harm [1]. However, it is not clear whether heavy drinkers with the worst outcomes are affected equally. Some have advocated more focus on targeting specific sub-groups such as extreme drinkers living in poverty or long-term unemployed men [8]. The Welsh Government are due to introduce a minimum unit pricing policy in Wales during 2020 [25], which will likely increase the price of very cheap spirits in supermarkets or off-licences, but may not change prices of spirits in bars or pubs greatly. Future research is needed to investigate whether and how alcohol-related harm may change as a result, particularly with respect to inequalities. Our results relating to increased harm from spirits could help inform policy and the development of interventions around promotions of stronger drinks.
---
Conclusions
Considering consumption by type of beverage did not help explain inequalities in alcohol-related harm, despite consumption being socioeconomically patterned. Smoking and BMI explained part of these differences, reducing inequalities by 35.7%, but deprived groups still had a persistently higher risk of (multiple) ARHA. Although more people in deprived areas were abstaining from alcohol, those who consumed alcohol drank more heavily. Deprived drinkers drank more beer (or cider) and in most age groups also spirits, but less wine compared to less deprived drinkers. Whilst type of beverage was not important relating to inequalities in ARHA, there were differences in the risk of ARHA by type. One potential mechanism for the increased ARHA for spirits could be the faster absorption of alcohol from stronger drinks in one binge drinking session or "pre-loading" before going out in younger people. Our results could help inform interventions on reducing promotions of stronger drinks. The minimum unit pricing policy due to be implemented in Wales during 2020 will likely increase the price of some spirits in supermarkets and off-licences, and our results may inform research evaluating the effect by type of beverage, but also inequalities in alcohol-related harm. Future research should also investigate comorbidities further as an additional explanation of the alcohol harm paradox and wider social inequalities.
---
Availability of data and materials
The datasets used in this study are available in the SAIL Databank at Swansea University, Swansea, UK, but as restrictions apply they are not publicly available. All proposals to use SAIL data are subject to review by an independent Information Governance Review Panel (IGRP). Before any data can be accessed, approval must be given by the IGRP. The IGRP gives careful consideration to each project to ensure proper and appropriate use of SAIL data. When access has been granted, it is gained through a privacy-protecting safe haven and remote access system referred to as the SAIL Gateway. SAIL has established an application process to be followed by anyone who would like to access data via SAIL at https://www.saildatabank.com/application-process.
---
Abbreviations
95% CI: 95% confidence interval; ARHA: Alcohol-related hospital admission; BMI: Body Mass Index; ELAStiC: Electronic Longitudinal Alcohol Study in Communities; HR: Hazard ratio; LSOA: Lower layer Super Output Area; SAIL: Secure Anonymised Information Linkage

---
Authors' contributions
DF and AG designed the study, AG performed all analyses and drafted the manuscript. AA, SP and SM designed the ELAStiC project, and AA and LT worked on specifications, data linkage and extraction. All authors commented on the manuscript and approved the final version.
---
Appendix Table 6 Sensitivity analysis using limited dataset following survey date only: comparison of model results for each socioeconomic measure
Ethics approval and consent to participate
Approval for the use of anonymised data in this study, provisioned within the Secure Anonymised Information Linkage (SAIL) Databank, was granted by an independent Information Governance Review Panel (IGRP) under project 0336. The IGRP has a membership comprised of senior representatives from the British Medical Association (BMA), the National Research Ethics Service (NRES), Public Health Wales and NHS Wales Informatics Service (NWIS). Usage of Welsh Health Survey data was approved by Welsh Government. The use of anonymised data for research is outside the scope of the EU General Data Protection Regulations (GDPR) and the UK Data Protection Act.
---
Consent for publication
Not applicable
---
Competing interests
The authors declare that they have no competing interests.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
Characteristics and patients' portrayals of Norwegian social media memes. A mixed methods analysis.
Background: Despite reports on troublesome contents created and shared online by healthcare professionals, a systematic inquiry of this potential problem has been missing. Our objective was to characterize the content of healthcare-associated social media memes in terms of common themes and how patients were portrayed.
---
Materials and methods:
This study applied a mixed methods approach to characterize the contents of Instagram memes from popular medicine- or nursing-associated accounts in Norway. In total, 2,269 posts from 18 Instagram accounts were included and coded for thematic contents. In addition, we conducted a comprehensive thematic analysis of 30 selected posts directly related to patients.
Results: A fifth of all posts (21%) were related to patients, including 139 posts (6%) related to vulnerable patients. Work was, however, the most common theme overall (59%). Nursing-associated accounts posted more patient-related contents than medicine-associated accounts (p < 0.01), but the difference may be partly explained by the former focusing on work life rather than student life. Patient-related posts often thematized (1) trust and breach of trust, (2) difficulties and discomfort at work, and (3) comical aspects of everyday life as a healthcare professional.
---
Introduction
The arrival and spread of online social media have introduced possibilities and challenges for society all around the world. For healthcare students and professionals, the implications of online presence and behavior are still emerging and e-professionalism is a construct comprising "the attitudes and behaviors (some of which may occur in private settings) reflecting traditional professionalism paradigms that are manifested through digital media" (1). Unfortunately, studies have revealed that e-professionalism is difficult, especially for students (2)(3)(4)(5)(6), and concerns have recently been raised across countries that certain forms of online humor published by healthcare workers conflict with professional values (7)(8)(9)(10). These concerns have, however, been anecdotal in nature and systematic characterization of such material is lacking.
Humor is a complicated matter in terms of professionalism and serves multiple functions for healthcare professionals. It can facilitate communication, support therapeutic processes or act as a strategy to cope with demanding situations and difficult emotions (11,12). By sharing challenging experiences through jokes, healthcare workers remind each other that struggling and making mistakes are common, without inflicting shame or guilt (7). However, not every form of humor aligns with the professional norms in healthcare. Stigmatized groups seem to be especially vulnerable to ridicule (13). Dark humor, ridiculing tragic events and suffering, can be a useful tool in the face of distress, but may appear uncanny, hostile or offensive from the outside (14). In some cases, humor can become abusive or degrading towards vulnerable patients (15,16). Thus, there has been a call for the education of healthcare professionals to also address the use of humor (17) as part of the wider "hidden curriculum" (18).
Memes constitute a genre of humor that has gained attention in relation to troublesome online contents (7)(8)(9). A meme is typically an image or short video annotated with text shared in social media. Examples are not reproduced here for legal reasons, but illustrative examples have been published by Berre and Peveri (9), Harvey (7), and Song and Crowder (10). The social media platform Instagram, which is intended for image and video sharing, has about 2.8 million users in Norway, corresponding to 67% of the adult population, and more than half of those between 18 and 50 years of age report daily use (19). Use is highest among young women (18-29 years), of whom 89% have an Instagram account. The potential for wide outreach is thus considerable, and problematic contents produced by healthcare students have already caused concerns among educators (9). The lack of systematic knowledge regarding the contents of these images and videos makes it impossible to assess the prevalence of problematic material and restricts how educators can thematize this phenomenon in terms of e-professionalism.
To address the need for systematic descriptions of social media memes, this paper employs a mixed methods approach to characterize Norwegian healthcare-associated memes posted on Instagram. The aim of this study is to provide systematic knowledge to guide and support public discussions regarding healthcare professionalism and humor in social media, and to identify areas where social media memes can be used as a resource for professional identity formation in healthcare education.
---
Materials and methods
---
Data collection
Google was used to search for an initial list of relevant accounts (search queries: "medisin memes site:instagram.com" and "sykepleie memes site:instagram.com"). The search was conducted on June 16th, 2021. For each account with fewer than 500 followers, the lists of followers and followings were manually reviewed, and relevant accounts noted. The process was repeated until no more relevant accounts could be identified. Accounts were included in the study according to the inclusion and exclusion criteria in Table 1 and categorized as related to nursing or medicine, and the number of followers and followings was recorded. From the selected accounts, all posts published prior to June 1st, 2021 were assessed for eligibility. The delay between June 1st and 16th was assumed to be enough for the posts to receive representative reactions in the form of likes and comments. Images, videos, date, caption, and the number of likes and comments were extracted for each post. The publication date of the first post from each account was used to calculate account age.
The study was approved by the Norwegian centre for research data (NSD, reference number 128255) and the included accounts were notified and received written information about the study in line with privacy regulations.
---
Quantitative analysis
The quantitative analysis aimed to (1) characterize the popularity of various themes and (2) explore whether specific themes affect the response to the posts. Codes were developed by two authors in collaboration from a set of 100 randomly selected posts and independently validated by three coders. The interrater reliability of each code was assessed with Gwet's Agreement Coefficient 1 (AC1). AC1 is robust to the Kappa Paradox, where Cohen's Kappa underestimates agreement in the case of skewed data, i.e., when the prevalence of some codes is small (20). The codes were refined and independently tested until satisfactory interrater reliability was reached (AC1 > 0.40), except for codes that were expected to show large inter-rater variability (i.e., Vulnerable patient and Offensive). Next, each post was randomly assigned to three independent coders. The coders had the option to flag posts for review if they were difficult to code, and posts were excluded if all three coders found it difficult to assign suitable themes. To improve validity, only codes applied by ≥2 coders were kept for analysis. All posts marked for review were evaluated by two authors in collaboration and recoded.
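For readers unfamiliar with the statistic, a minimal Python sketch of Gwet's AC1 for two raters and a single binary code may be useful (an illustration only; the study's own analyses were run in R):

```python
def gwet_ac1(rater1, rater2):
    """Gwet's first-order agreement coefficient (AC1) for two raters
    and one binary code (1 = code applied to the post, 0 = not applied)."""
    n = len(rater1)
    pa = sum(a == b for a, b in zip(rater1, rater2)) / n  # observed agreement
    q = (sum(rater1) + sum(rater2)) / (2 * n)             # mean prevalence of the code
    pe = 2 * q * (1 - q)                                  # AC1 chance-agreement term
    return (pa - pe) / (1 - pe)

# Deliberately skewed example: the code is rare, yet the raters agree on 8 of 10 posts.
r1 = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
r2 = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
ac1 = gwet_ac1(r1, r2)
```

On this example AC1 ≈ 0.76, whereas Cohen's kappa for the same ratings is slightly negative (its chance-agreement term, 0.82, exceeds the observed agreement of 0.8), illustrating the Kappa Paradox mentioned above.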
The proportion of posts belonging to each theme was calculated by account to compensate for the varying number of published posts. These proportions were used to calculate correlation between themes (Spearman's correlation coefficient) and compare prevalence between professions (Kruskal-Wallis test).
Linear mixed models were used to assess the effect of specific themes on the number of reactions (likes and comments). The number of reactions to a post depends on the number of followers of the account at the time of posting. To account for this, we devised a case-control comparison by selecting four control posts for each theme-related post (case). For each post related to a specific theme (e.g., student life), the two previous and two next posts not related to that theme were selected from the same account (Supplementary Figure 1). Thus, multiple case-control groups of 3-5 posts were created for each theme. Next, the number of reactions was standardized by dividing by the standard deviation of the corresponding control posts. The regression coefficient can then be interpreted as how many standard deviations a specific theme increases or decreases the number of reactions. Nested clustering (case-control group nested within account) was included in the model as a random intercept. Profession and theme were included as fixed effects, as well as their interaction. Independent models were fitted for the number of likes and comments.
Bootstrapping was used to estimate confidence intervals for the regression coefficients. The case-control groups were stratified by account and resampled with replacement. Next, new linear mixed model regression coefficients were estimated from the bootstrapped samples. Finally, the 2.5th and 97.5th percentiles were extracted and regarded as 95% confidence intervals for the coefficients.
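The resampling scheme amounts to a percentile bootstrap, sketched generically below in Python (in the actual analysis the statistic recomputed on each resample was the set of mixed-model coefficients, and resampling was stratified by account; both are simplified away here):

```python
import random

def percentile_ci(groups, statistic, n_boot=2000, alpha=0.05, seed=42):
    """Resample whole case-control groups with replacement, recompute the
    statistic on each resample, and take the 2.5th/97.5th percentiles of
    the resulting distribution as a 95% confidence interval."""
    rng = random.Random(seed)
    estimates = sorted(
        statistic([rng.choice(groups) for _ in groups]) for _ in range(n_boot)
    )
    lo = estimates[int(n_boot * alpha / 2)]
    hi = estimates[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Toy data: five case-control groups of standardized reaction counts,
# with the statistic being the mean over all posts in the resample.
groups = [[0.8, 1.1], [1.4, 0.9], [1.0, 1.2], [0.7, 1.3], [1.1, 1.0]]
mean_all = lambda gs: sum(x for g in gs for x in g) / sum(len(g) for g in gs)
lo, hi = percentile_ci(groups, mean_all)
robust = not (lo <= 0.0 <= hi)  # 'robust effect' = 95% CI excludes zero
```

Resampling whole groups rather than individual posts preserves the within-group dependence between a case and its controls.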
All calculations were conducted in R version 4.0.2 (21), and p-values were adjusted within each test using the Benjamini-Hochberg procedure. Visualizations were made with the UpSetR (22), corrplot, and ggplot2 (23) packages for R.
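For reference, the Benjamini-Hochberg step-up adjustment (what R's `p.adjust(p, method = "BH")` computes) can be sketched in a few lines of Python:

```python
def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg (FDR) adjusted p-values in original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):  # step up from the largest p-value
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```

Each raw p-value is scaled by m/rank, and a running minimum taken from the largest rank downwards guarantees that the adjusted values stay monotone in the raw ones.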
---
Qualitative analysis
Qualitative analysis aimed to provide rich descriptions of how the memes portrayed patients and their relatives and to explore characteristics of professionally problematic posts. To this end, 15 problematic posts and 15 unproblematic posts were systematically selected for focused discussions.
To identify problematic and unproblematic posts, the posts including patients/relatives (n = 491) were scored for offensiveness on a numerical rating scale from 0 (not offensive) to 10 (highly offensive) by at least three authors. The 15 posts with the highest and the 15 with the lowest mean scores were considered most and least offensive, respectively, and selected for comprehensive qualitative analysis. The qualitative analysis was conducted using a methodology originally designed for the analysis of press photograph stories (24) and later adapted for social media analysis (25). Four authors jointly reviewed all selected posts through focused discussions, and the following features were detailed for each selected post (Supplementary Table 2): (1) Uninterpreted content, (2) Text, (3) Interpreted content, (4) Humor, (5) Caption, (6) Offensiveness, and (7) Theme. Two experienced qualitative researchers (BPM and BS) reviewed the selected posts independently to cross-check that identified themes corresponded to the overall impression. A consensus on the most prominent message in each meme was achieved through thorough discussion.
---
Results
---
Accounts and posts
After the initial Google search and review of lists of followers and followings, 51 accounts were assessed for eligibility. Of them, 18 accounts were included and categorized as related to medicine (n = 5) or nursing (n = 13). Account characteristics are shown in Table 2. In total, 2,319 posts had been published prior to June 1st, 2021. The median (range) number of posts per account was 96. Not all accounts were actively publishing posts at the time of the study. The median (range) time span from the first to the latest post was 284 (14-979) days, and the median (range) number of posts per month was 11.9 (2.2-86.8).
In total, 16 posts were marked for review by all three coders and excluded from further analysis, whereas 227 were flagged for review by one or two coders and recoded by two authors in collaboration. Thirty-four posts were excluded during recoding, leaving 2,269 posts for further analysis. Of these, 14 posts did not reach majority on any codes but have been kept in Table 2 as they were not flagged for review during coding. A flow chart of post inclusion and exclusion can be found in Supplementary Figure 2.
---
Quantitative analysis
Eleven general themes were identified and are described in Table 3. The posts were coded by three authors, resulting in high inter-rater reliability (AC1 ranging from 0.77 to 1.00, adjusted p < 0.001, Supplementary Table 1).
The occurrence of various themes is illustrated in Figure 1 for all accounts and in Supplementary Figure 3 for medicine- and nursing-associated accounts separately. Most posts were related to work, either alone (n = 699) or in combination with patients (n = 422) or private life (n = 183). In total, 491 posts were patient-related (Figure 2). Of these, 116 posts were regarded as offensive (24%), 148 posts (30%) as depicting vulnerable patients, and 67 posts (14%) as an offensive depiction of a vulnerable patient. There were significant correlations between some themes (Supplementary Figure 4): Accounts posting about work tended to post about vulnerable patients and patients in general, but not in-jokes or about student life or exams. Accounts posting mostly about student life, on the other hand, tended to post in-jokes and about exams, but less about patients or work.

Note to Table 3: B likes and B comments refer to standardized regression coefficients estimated from linear mixed models (see text for details); positive coefficients reflect the theme being associated with an increase in the number of reactions. Confidence intervals (95% CI) are estimated from bootstrapping, and robust effects (95% CI not containing zero) are indicated in bold.
Accounts related to medicine or nursing showed significant differences in the number of posts related to several themes. The relative occurrence of various themes is shown in Figure 3. Posts from medicine-associated accounts were more often about exams (p < 0.05), student life (p < 0.05) or in-jokes (p < 0.01). In contrast, posts from nursing-associated accounts were more frequently related to work (p < 0.05), private life (p < 0.01) or patients, both vulnerable (p < 0.05) and in general (p < 0.01).
Overall, theme had only a minor effect on the number of reactions, as shown by the regression coefficients given in Table 3 and shown by profession in Figure 4, with some exceptions. The strongest effect was seen for advertisements, which had a clear tendency to receive more comments than other posts. However, due to the low number of such posts, the effect was not robust to bootstrapping and was only estimable for nursing-associated accounts (Figure 4). Posts containing in-jokes or relating to exams/tests also tended to receive more comments, but the effect was much weaker than for advertisements (Table 3). Although the effect of theme on the number of likes was overall weak (<1 standard deviation compared to control posts), posts with in-jokes or relating to vulnerable patients tended to receive more likes than other posts. In contrast, posts depicting academic concepts tended to receive fewer likes. When assessed by profession, a similar pattern emerged (Figure 4). In medicine-associated accounts, posts about work or coded as offensive tended to receive fewer likes. In contrast, posts about work received more comments in both medicine- and nursing-associated accounts and somewhat more likes in nursing-associated accounts. There was a tendency for patient-related posts to receive more likes and comments in nursing-associated accounts, whereas this effect was absent for medicine-associated accounts.
---
Qualitative analysis
---
The depiction of patients: How and who
The 30 selected posts showed a rich variation in graphical techniques and the use of symbols. The largest portion of posts contained cartoons or snapshots, either pictures or video clips, from popular culture (e.g., scenes from TV series) with added explanatory text and captions. Few posts depicted actual situations involving healthcare. Instead, patients and healthcare workers were typically represented by other characters, using text captions to convey the setting and the roles. Both patients and healthcare workers were sometimes depicted as animals. There were, however, examples of what may have been authentic patients, in an ambulance or in hospital, and a photo taken inside a Norwegian healthcare institution (the photo did not show any patients or sensitive information). Some posts made use of more advanced symbolism, such as the Trojan horse. Some groups of patients were repeatedly depicted in the 30 selected posts. These were typically vulnerable patients such as confused or fragile elderly patients, patients suffering from psychosis or delirium, and drug-affected or agitated patients. The healthcare worker was often anonymous, and profession and position were typically not stated explicitly.

Figure 1 caption: Number of posts related to common themes. The total number of posts related to each theme is shown to the left, whereas the upper bar plot shows the intersections between various themes (e.g., 343 posts were related to both work and patient). Only intersections with ≥5 posts are shown.
The point of view varied between posts. Often, the character representing a healthcare worker was marked with personal pronouns such as "me", "I", or "you". In others, we observed the situation as an unnamed third party. Another common configuration was a photo or video representing the patient's response to an action, captioned "every time you [do something to the patient]". The patient was referred to as "me" in only one of the 30 selected posts.
---
Thematic analysis: Main themes
Three overarching and recurring themes emerged during the analysis of posts considered the most or least offensive. Below we present main themes and related subthemes from the thematic analysis with illustrative examples demonstrating how the themes manifest themselves in distinct ways in posts considered offensive when compared to posts considered innocent.
---
Trust and the breach thereof
Many posts involved some form of breach of trust. This was thematized in various and diverse ways, often in the shape of deception: healthcare workers lying, omitting, pretending. Among the most offensive posts, this theme was frequently connected to administering medications, typically antipsychotics or sedatives. An illustrative example: a healthcare worker saying "I am just flushing your venous catheter" whereas the syringes are clearly marked with antipsychotics. In one such post, the healthcare worker additionally calms the patient by, falsely, saying "Yes, it is only salt water". Another form of pretending was demonstrated by a slow code scenario where an elderly patient receives incomplete and superficial chest compressions from a healthcare worker while the relatives are crying in the background. Some of the more innocent posts also touched upon forms of deception, such as concealing feelings in front of the patient or pretending to be working while hiding from tiresome patients or relatives.

Figure 2 caption: Number of patient-related posts coded as vulnerable or offensive. The total number of posts related to each theme is shown to the left, whereas the upper bar plot shows the intersections between various themes (e.g., 67 posts were related to vulnerable patients and considered offensive).
Dealing with unprofessional thoughts, fantasies, and feelings related to patients was considered a distinct aspect of managing trust as a healthcare worker. This spanned from an expressed desire to hurt and punish patients for being difficult, and enjoying that they struggle, to frustration over patients, not prioritizing what is best for the patient, and looking at patients' bodies with uncaring eyes.
---
Difficulties and discomfort at work
Almost all the discussed posts depicted situations at work that involved some form of difficulty or discomfort. In contrast to posts considered offensive, innocent posts typically revolved around challenges encountered at work as a healthcare professional, with patients having passive roles, such as observers or extras, or being merely referred to. Examples include doing heavy lifting alone, hiding from and avoiding patients, feeling incompetent or like an imposter, and struggling with a task in front of a patient. One post stood out as more confession-like than a meme: a healthcare worker described being sexually assaulted by a patient and, subsequently, laughed at by colleagues when seeking support. In posts considered offensive, on the other hand, the patients were often portrayed as the direct cause of the discomfort or challenge. A post considered offensive depicted a healthcare worker entering a patient's room where the patient is lying exhausted on the floor with hands covered in feces, which have also been smeared onto the walls. A text caption informs that the patient had previously refused to receive assistance.
Many posts thematized how difficulties at work were solved in less-than-optimal ways, often involving breaches of trust as described above. Uncooperative patients and patients taking a long time to perform basic tasks -delaying or creating "difficulties" for the healthcare worker -tended to be met with frustration, anger, force, and deceit.
---
The comedy of everyday life as healthcare professionals
Another distinct theme emerged from work-situated posts that did not involve discomfort or difficulties but rather focused on absurdity or surprise. A subgroup of the posts considered innocent depicted small, everyday incidents such as a patient being woken by the alarm of an infusion pump, a healthcare worker telling the same joke to multiple patients, or a healthcare worker accidentally making noises when checking up on a sleeping patient. These posts often implied deep compassion for the patient or an unspoken alliance between patient and healthcare worker. For example, several posts showed the administration of medicine where the dosage is far too low to sufficiently help the patient. This was, however, framed as the fault of an absent doctor, leaving both the depicted healthcare worker and patient in shared helplessness.

Figure 4 caption: Effect of theme on the number of (A) likes and (B,C) comments. The regression coefficients are from linear mixed models and represent the deviation from the mean in terms of standard deviations of nearby posts without the specified theme. The distribution shows the robustness of the estimates as calculated by bootstrap validation. The dotted lines indicate no effect. Some themes were separated (C) to avoid skewing of scale, as indicated by arrows in panel (B). For medicine-associated accounts, the number of posts related to advertisements (4 posts) and vulnerable patients (1 post) was too low to yield interpretable estimates.
In the offensive group, there were posts where the comedy was entirely at the patient's expense, such as psychotic patients doing or saying allegedly strange, ridiculous, or stupid things, or patients' angry responses to naloxone (an antidote to opioids). These posts were considered more malign. An interesting contrast was the depiction of an elderly patient happily and eagerly folding hospital towels. Despite this being humor at the patient's expense, it was perceived as more compassionate than ridiculing and was part of the group of posts considered innocent.
---
Humor based on whose pain?
Systematic differences emerged between posts considered offensive or not regarding at whose expense the posts' humor was made. In many of the offensive posts, the patients were subject to an action by a healthcare worker. Consequently, the humor was at the expense of the patient, and the patients' vulnerability was an important part of the humorous element of the post. This is exemplified by the repeated theme of deceitful administration of medication to patients, often depicted as either psychotic or demented. In the innocent posts, on the other hand, the patients were not negatively affected by the actions of the healthcare worker, and the patients were mostly supporting characters in the situations depicted. Here, the "pain" was clearly at the expense of the healthcare worker. However, the focused discussions revealed that these differences were not always obvious. For example, some of the posts considered offensive and involving pain at the patient's expense could be interpreted as displays of the power and helplessness healthcare workers may experience when facing specific patients.
---
Discussion
Despite growing concerns regarding e-professionalism among healthcare students and professionals, the contents of social media humor from these groups have evaded systematic characterization. To fill this gap, we employed a mixed methods approach to map important themes both quantitatively and qualitatively. The examined memes showed diverse, yet characteristic, forms of humorous content, and clear differences were found between professions. While nursing-associated accounts had large audiences and focused on themes related to work life, the medicine-associated accounts had smaller outreach and focused on student life. Theme had only minor effects on the number of likes and comments. The most offensive posts included vulnerable patients such as elderly patients and people with mental disorders or drug addictions, whereas the least offensive posts thematized challenges as a healthcare professional and the comedy of everyday life. Although the patient-related content comprised only a minor subset of the material, many problematic examples were found, and those regarded as most offensive were found to jeopardize the trust between patients and healthcare professionals. It should, however, be noted that none of the included posts broke the duty of patient confidentiality or were found so problematic that further steps were considered.
The accounts belonging to the different professions (medicine and nursing) were clearly targeting distinct audiences: the nursing-associated accounts targeted mainly nurses in working positions, whereas the medicine-associated accounts targeted student populations. This notion is supported by the medicine-associated account names often referring to universities. It is possible that the shorter duration of the nursing education, with frequent separation into internships at various places, leaves less room for a meme culture to form. The relatively small subset of student-targeted nursing accounts has, however, caused ethical concerns (9). Another possible explanation is that the number of working nurses (about 50,000, excluding midwives and specialist nurses (26)) is larger than the number of nursing students (about 5,000 (27)). The relative lack of medicine-associated memes from working physicians may reflect professional maturation during the course of study or that other platforms or private accounts are used. Shedding light on the "hidden curriculum" has been recognized as an important step to fully integrate professional identity formation into healthcare education (18), and our findings suggest that refining e-professionalism cannot be a process isolated to educational institutions but must include professional bodies reaching healthcare practitioners as well.
The professional tension accompanying social media has manifested itself during the last decade, and along with it the discussion of how healthcare professionals should conduct themselves on such platforms, so-called e-professionalism. One extreme approach may be to conclude that all public online depictions of patients produced by healthcare professionals are dubious. Being or feeling seen, exposed, looked at, or deprecated by others are central components of shame (28,29), and reminding the patient that one is constantly observed, evaluated, thought about, and discussed may induce self-consciousness and perhaps evoke both shame and a sense of betrayal or alienation, especially if one is negatively portrayed or the perspective conflicts with one's own experiences. We found several examples of this, such as healthcare professionals experiencing discomfort when meeting or observing a patient or finding a patient laughable in appearance or behavior. Depriving patients of control over how they are imagined, portrayed, and spoken about may add to their powerlessness in the face of a healthcare system where their social and bodily control has, often, already been weakened. Trust is one of the pillars of professionality (18); healthcare professionals are obliged to guard patient integrity in all situations, and this commitment conflicts with the creation of humorous memes. This view invites students of healthcare professions to reflect upon reasons why collapses in (e-)professionalism may occur and why one might be tempted to expose or ridicule a patient. In addition, one of the expressed concerns relating to social media memes has been the possible normalization of problematic attitudes among students. The memes can become memorable and influential parts of the so-called "hidden curriculum" of healthcare education (7), which is now recognized as an integral part of how professionalism develops (18).
The repeated exposure of vulnerable patient groups, such as patients suffering from dementia or psychiatric or addiction disorders, that was identified in the current study may contribute to an "othering process" similar to what has been seen during the COVID-19 pandemic (30). Another possible route of harm is that "these memes can distort our senses, blunting our abilities to detect human vulnerability and, in so doing, poison the relational ethics of our practice" (8). These concerns are, however, not unique to medical memes and pertain to all use of humor in healthcare settings (16).
A contrasting view may be that the production of humorous memes is an important form of self-expression that, as long as patient confidentiality is maintained, offers creative ways to identify, communicate, and cope with problems and challenges arising in professional life. Creative artmaking is an effective way to explore issues related to professional development, and visual arts offer distinct benefits compared to verbal reflection (31). Patients are not to be infantilized but should be respectfully treated as ordinary people, which may include that unflattering behavior is commented upon and pointed out, not as an act of humiliation but to help refine patients' ability to mentalize and know how they appear to others. Thus, the memes can possibly serve honorable causes, including as an educational tool or as a way to cope or vent (7,8,10). The empowering and positive potential of healthcare-associated memes is illustrated by memes produced by or for patients [e.g., (32)(33)(34)(35)(36)(37)]. This view invites students of healthcare professions to explore how humor and social media can be used in constructive ways to raise awareness about challenges encountered at work and as an alternative and casual way of communicating with (specific groups of) patients. The fact that most of the memes we analyzed relate to work or student life, and often frustrating sides of these, such as work-spare time conflicts or exams, suggests that the memes are primarily a way to vent. In particular, the memes can be used as a vehicle to communicate experiences that are not easily shared otherwise, such as shame (38), the embarrassment of making mistakes (7), or being disempowered (10). These are common yet painful and vulnerable experiences among healthcare workers that may be eased by establishing them as shared experiences that can be joked about.
Thus, educators may seek to "help students and trainees to find an authentic voice, based at least in part on the profession's ideals, that works in both medical and non-medical life-worlds" (39, 40) so that the memes can remain useful while adhering to professional standards. The Medical Education e-Professionalism (MEeP) framework is a research-based attempt to define core competencies for healthcare professionals in relation to digital space (41). Here, developing professionality involves recognizing the mission and social contract of the medical profession, and specific competencies are described along the axes of professional values, behaviors, and identity formation. The framework has been shown useful to guide implementation of e-professionalism education (42).
The qualitative analysis revealed that problematic posts often depict conflicts between normative and descriptive ways of providing healthcare services. Although all healthcare professionals are trained to know the importance of patient respect, confidentiality, and trust, one might find oneself in situations where the highest professional standards cannot be met due to organizational (e.g., high workload or understaffing) or personal (e.g., inexperience, anger, or frustration) reasons and where techniques such as deceit are found necessary. These illustrations may have educational value that can enlighten healthcare professionals and administrators about the unpleasant pragmatism arising from how the services are organized. From a patient perspective, however, the unpleasant pragmatism may lower the public's trust in the healthcare services. Nevertheless, healthcare professionals must reconcile human imperfections and organizational limitations with the demands of professionalism, and keeping patient-directed humor at a spatial and temporal distance from patients, such as between colleagues in the lunch room, has been suggested as an acceptable but controversial solution (16,43). With online social media, however, spatial and temporal distance collapses and the borders between private and public are blurred (1). All the Instagram accounts included in this study were public accounts, accessible to everyone. For some, deciding to create a public rather than a private profile (where access must be granted manually) may have been a rushed decision not given much thought. For others, however, the meme accounts provide a platform to reach tens of thousands every day. Although we found few advertisements in our material, the potential for economic gain adds yet another ethical dimension to the online presence of healthcare professionals.
In contrast to the collapse of temporal and spatial distance, the memes commonly preserve a social distance by using medical terminology, requiring detailed medical knowledge to "get it", or by referring to situations unique to healthcare professionals. It is likely that this exclusiveness makes the memes able to strengthen the sense of group identity among healthcare professionals (7). One may also argue that this social distance mitigates the potential for harm, as it makes the contents of the memes less accessible and understandable for people outside the healthcare professions. The official presence of governmental bodies and healthcare institutions on the same platform, possibly serving content side by side with the anonymous accounts, is yet another example of unclear borders that may give the memes unwarranted legitimacy.
Overall, this study has demonstrated that patients play a peripheral role in healthcare-associated social media memes but, unfortunately, close to 5% of the included memes were regarded as offensive. The characteristic features of these offensive memes were intentionally deceptive practices, which may have been deemed necessary at the time, mainly in the form of administering medications, as well as unflattering depictions of often vulnerable patient populations. Future studies are, however, necessary to investigate the concordance between the opinions of fourth-year medical students, as in this study, actual patients, and experienced healthcare professionals. The rapid development of new social media platforms, where the borders between private and public are progressively dissolved and where algorithms select for increasingly shocking or eyebrow-raising content, calls for further research to enable educational institutions to deal with these aspects of e-professionalism. The diversity revealed by the current study makes an open-minded approach necessary rather than abrupt condemnation. We hope that our findings can support nuanced reflections regarding the positive and negative sides of healthcare-associated memes through empirical knowledge and guide the continuous refinement of e-professionalism in healthcare so that space can be found for the human sides of both patients and professionals.
---
Strengths and limitations
This study is, to our knowledge, the first broad and systematic characterization of social media memes produced by healthcare students and professionals. Norway is a country whose population possesses excellent digital skills and has wide access to social media (19,44,45), suggesting that both the creators and the audience of the included memes are likely to be diverse and representative of a wider population. The combination of quantitative and qualitative methods enabled both broad and deep characterization of the memes. However, the approach involves important limitations. Although the study aimed to characterize the content of medical memes in an objective manner, the group of coders was small and homogeneous (all medical students, although both genders were represented), which could have influenced the results. To ensure consistency and trustworthiness of our results, the supervision and active participation of two senior researchers, both with experience from qualitative research and either clinical work or medical ethics, was necessary. Nevertheless, both the quantitative coding of posts and the thematic analysis involved subjective judgment. For example, the classification of posts as offensive or not revealed significant differences between coders. However, interrater agreement was found to be satisfactory, and the subjectivity of the general coding was further mitigated by removing codes lacking majority support. Humor is inherently subjective and individual, and shaped by factors such as culture, age, sex, and experience. It is therefore likely that medical students' views on what is offensive may differ from those of other groups, and it would have been interesting to include coders with other backgrounds, such as patients or experienced clinicians, to get a more diverse point of view. This also applies to the thematic analysis, where it would have been interesting to involve a more heterogeneous group in the discussion of the selected memes.
Finally, the focused discussion only involved a selection of the posts and may thus have missed themes that were present in the larger material.
---
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
---
Ethics statement
The study was approved by the Norwegian Centre for Research Data (NSD, reference number 128255) and the included accounts were notified and received written information about the study in line with privacy regulations.
---
Author contributions
BM and BS provided supervision. ST and BCS developed the codes to thematically classify posts, and MR, ABJ, and AHJ validated them. MR, AHJ, ST, ABJ, and BCS conducted the coding of the posts. AHJ and MR re-coded posts marked for review. ST, BCS, MR, EU, and AHJ rated posts for offensiveness. ST, BCS, AHJ, MR, ABJ, BM, and BS participated in focused discussions to qualitatively assess selected posts. AHJ conducted statistical analyses and wrote the first draft of the manuscript. All authors contributed to the design and conceptualization of the study, contributed significantly to the submitted work, and reviewed and approved the manuscript.
---
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
---
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
---
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmed.2023.1069945/full#supplementary-material
Mempawah Regency is a regency with a high number of stunting cases; the government has sought to reduce stunting rates by providing nutritious food assistance to babies at risk of stunting. Stunting, however, is not only a matter of nutrition: its causes are complex. This study aims to describe the sociocultural factors that contribute to stunting. The research used a descriptive qualitative approach; the informants consisted of 7 people drawn from government, health workers, traditional leaders, and families whose children were stunted. Informants were selected using a purposive technique, and data were collected through interviews, observation, and documentation. The results indicate a relationship between low education, widespread early marriage, belief in myths, and parenting patterns and the occurrence of stunting.
---
Introduction
---
International Journal of Multidisciplinary Approach Research and Science
Stunting is characterized by impaired growth (child height) and development, resulting in a child being shorter than other children (Harper et al., 2023). A child can be said to be stunted if the child is born with a height of less than 48 cm and a weight of less than 2.5 kg; this condition is then monitored for 4-12 months, and if there is no change, the child is said to be stunted (Rahayu et al., 2018). Stunting is a significant public health problem in Indonesia, with a prevalence of around 37% (Beal et al., 2018). Stunting is a national health problem; therefore, the government has made stunting prevention part of the national development strategy by establishing an accelerated stunting prevention program, structured on the basis of studies on the implementation of accelerated nutrition improvement and on the success of other countries in preventing stunting. The program is strengthened by Presidential Regulation No. 72 of 2021. Through the national strategy program to accelerate stunting prevention, the central government intervenes in districts across Indonesia to jointly bring the stunting rate down to 14% by 2024. The stunting rate in Indonesia fell to 21.6% in 2022 (Ministry of State Secretariat of the Republic of Indonesia, 2023); even though it has decreased, efforts must still be made to reduce the national stunting rate more drastically. Stunting concerns not only a child's height but also the quality of Indonesia's human resources on the road to Golden Indonesia 2045.
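The screening rule quoted above from Rahayu et al. (2018) amounts to a simple decision check. The sketch below makes that criterion explicit; the function name and the code form are our own illustration of the quoted rule, not a clinical tool.

```python
def flags_stunting_risk_at_birth(height_cm: float, weight_kg: float) -> bool:
    """Illustrative sketch of the criterion quoted from Rahayu et al. (2018):
    a newborn with a height below 48 cm and a weight below 2.5 kg is flagged
    for stunting monitoring over the following 4-12 months.
    Not a clinical diagnostic tool."""
    return height_cm < 48.0 and weight_kg < 2.5

# A 47 cm, 2.4 kg newborn would be flagged for monitoring;
# a 50 cm, 3.2 kg newborn would not.
print(flags_stunting_risk_at_birth(47.0, 2.4))  # True
print(flags_stunting_risk_at_birth(50.0, 3.2))  # False
```

Note that, per the quoted rule, both thresholds must be crossed for the flag to be raised, and the flag only triggers monitoring; stunting itself is only declared if no change is observed over the monitoring period.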
West Kalimantan is one of the 12 provinces with the highest stunting prevalence (8th out of 12) and the highest number of stunted children under five. The stunting prevalence rate in West Kalimantan in 2022 was 29.8%, a figure in the high category according to the WHO range of 20-30%, and the local government faces a major task in reducing the stunting rate to at least 17% in 2023. One of the efforts that can be made is prevention. Mempawah Regency is one of the regencies in West Kalimantan, with a stunting rate of 25.1% in 2022. The district government is committed to reducing stunting rates in 2023 and 2024. Various efforts have been made to this end, notably the establishment of a family assistance team in every village to help the government prevent stunting (databooks.metadata.co.id, 2023).
Government efforts to reduce stunting include fulfilling nutritional needs and outreach to prospective brides, pregnant women, mothers with children under two years, and mothers with toddlers (24-59 months), accompanied by routine monitoring (Rinto, 2022). The problem of stunting is not only about nutrition and height; it is a complex problem that needs to be studied in more depth from various perspectives. Factors that influence stunting include the condition of the baby, such as not being breastfed for six months, low socioeconomic status of the household, premature birth, short maternal height, and low maternal education; in addition, community conditions such as inadequate access to health services and geographic location also influence stunting (Beal et al., 2018).
The causes of stunting in Indonesia are multifactorial, including inappropriate complementary feeding practices, exposure to viruses, poor breastfeeding habits, inadequate maternal nutrition, and regional determinants such as poor water quality and sanitation, health services, the food system, and education (Putro et al., 2023). These factors contribute to the high prevalence of stunting in Indonesia and emphasize the need for interventions that address both its immediate and its underlying causes. Understanding the factors contributing to stunting is critical to developing effective prevention strategies, because stunting is not solely the result of malnutrition but is also influenced by education and social structure; the modern view of growth regulation emphasizes socio-economic-political factors that contribute to stunting (Mahriani et al., 2022). This means that addressing the reduction and prevention of stunting requires a holistic approach that goes beyond health parameters and engages the sociocultural context in which stunting occurs.
---
Literature Review
Several previous studies have stated that stunting occurs as a result of a complex interaction of various determinant factors, including socioeconomic and cultural influences (Mediani, 2020). Other researchers argue that politics and the economy, as well as society and culture, are among the factors that influence the occurrence of stunting (Wicaksono et al., 2022), but these studies did not provide specific, detailed accounts of such claims. This shows a gap in research focused on understanding the particular cultural factors that contribute to stunting, given that stunting results from complex interactions of household, social, environmental, and economic factors (Bahrun & Wildan, 2022).
Based on this explanation, this study aims to examine in greater depth the sociocultural factors shaping stunting in Tanjung Village, Mempawah Hilir District, Mempawah Regency, because understanding social and cultural factors can help develop effective interventions for preventing stunting (Onis & Branca, 2016).
---
Research Method
This study used a qualitative research method with a descriptive approach; the research location was Tanjung Village, Mempawah Hilir District, Mempawah Regency. The study involved seven informants drawn from the government, health workers, community leaders, and representatives of families with stunted children, selected using a purposive sampling technique. Data were collected through interviews, observation, and documentation; the data sources were primary and secondary data, validated through triangulation of sources.
---
Results and Discussion
Tanjung Village is one of the villages with a high number of stunting cases. The following is the data the author obtained from the Mempawah Hilir District Health Center regarding the number of stunting cases in Tanjung Village:
Table 1 shows that the number of stunting cases in Tanjung Village in June 2022 was 28. If no prevention efforts and solutions are provided, stunting is likely to occur in higher numbers. According to the head of the Mempawah Hilir District Health Center, the number of stunting cases in Tanjung Village is relatively high. The following are the results of the author's interview with informants: "Cases of stunting in Tanjung Village are quite high, second after Kampung Pasir Village. Nevertheless, we do not just rely on the figures; for cases of stunting, we need prevention so that stunting does not happen to the people of Mempawah Hilir Subdistrict." (Interview, 10 August 2022).
If the reduction of the stunting rate is not accelerated, stunting cases will snowball. Indirectly, stunting will give rise to new social problems and affect many aspects of people's lives. In terms of health, stunting affects the growth of a child's brain and nervous system, and it also has an impact on social problems such as poverty, education, and the economy (Rahmadhita, 2020; Saadah et al., 2022).
If stunted children are not treated and their numbers keep increasing, these children cannot study well or absorb lessons to the fullest, because balanced nutrition for a child's brain makes the child's ability to think and process information much better. In many countries, stunting is also related to children's low cognitive abilities and performance in school. Stunting affects learning capacity at school age, school grades and achievement, adult wages, the risk of chronic diseases such as diabetes, morbidity and mortality, and even economic productivity (Chowdhury et al., 2021; Sadler et al., 2022; Windasari et al., 2020).
One of the things this research aims to discuss is how social and cultural influences cause stunting in Tanjung Village. Based on the results of stakeholder interviews, the social and cultural environment contributes to stunting cases, namely through the community's level of education, the prevalence of early marriage, and the fact that some people still believe in traditional healers and in myths about pregnancy and feeding children.
---
Education
Education is essential in improving the quality of human resources; it is also a reference or benchmark for viewing and weighing a phenomenon. In the context of health, people are expected to have extensive knowledge and to be able to think critically, so that they have the knowledge to guide their daily activities. The total population of Tanjung Village in 2020 was 1,227, consisting of 614 men and 613 women with different educational backgrounds; the education levels are as follows:
---
Educational Level of Tanjung Districts
Figure 1 shows that most of the population have not attended school (391 people); 212 people have not finished elementary school, and 271 people have graduated from elementary school. Of those who continued, 130 people attended or graduated from junior high school, 160 people graduated from senior high school, 6 people hold a D1/D2 diploma, 11 people a D3 diploma, and 46 people a bachelor's degree (S1). No residents have continued to master's or doctoral degrees.
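The reported breakdown is internally consistent: as a quick check (ours, not part of the original article), the eight education-level counts sum exactly to the stated 2020 population of 1,227.

```python
# Education-level counts for Tanjung Village as reported in Figure 1,
# taken directly from the text above.
counts = {
    "no schooling": 391,
    "did not finish elementary school": 212,
    "elementary school graduate": 271,
    "junior high school": 130,
    "senior high school graduate": 160,
    "D1/D2 diploma": 6,
    "D3 diploma": 11,
    "bachelor's degree (S1)": 46,
}

total = sum(counts.values())
print(total)  # 1227, matching the stated total population of 1,227
```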
In 2020 there were 418 school-age residents aged 3-22 years (data.kalbarprov.go.id, 2021). However, only 184 residents had student status, meaning that many school-age children do not continue their studies. This is confirmed by the head of Tanjung Village:
"Some residents of Tanjung Village are already educationally literate, but very many are still at a low level of education. So their knowledge about stunting is also limited, and it is difficult for them to accept this understanding" (Village Head, interview, 10 August 2022).
The level of education also influences the formation of stunting (Husnaniyah et al., 2020); this was confirmed by the head of the Mempawah Hilir District Health Center, who stated that the community's relatively low educational level affects childcare practices. The following are the results of the author's interview with the head of the Mempawah Hilir District Health Center:
"Education in the Tanjung area is lacking. The average income and education of the Mempawah Hilir community are in the middle and lower brackets, so their knowledge of stunting is also lacking." (Interview, 10 August 2022)
The author observed that stunted children mostly come from low-income families whose parents have not finished, or have only finished, elementary school. Stunting in these families is caused by parents' limited knowledge: they think only of providing food, without considering its nutritional content, and because of their limited income they no longer pay attention to the adequacy of their children's intake, settling for what is filling and cheap (Nirmalasari, 2020).
Low education hinders the prevention and handling of stunting (Husnaniyah et al., 2020; Ramdhani et al., 2021; Wardana & Astuti, 2020). Even though socialization has been carried out, the Tanjung community still struggles to understand stunting, its causes, and how to overcome it. This leaves homework for the village government, health center, and posyandu to find effective new ways of explaining stunting to the community so that it can be easily understood and put into practice.
Parents with low education are 2.22 times more likely to have stunted children than parents with higher education (Hizni et al., 2010). A study of early childhood in Bangladesh found that children of mothers who had completed senior secondary education had a lower risk of stunting than children of uneducated mothers, and likewise children of fathers who had completed secondary education had a lower risk of stunting than children of uneducated fathers (Chowdhury et al., 2021). Education is crucial not only for the mother; the father also requires education, so that there is balance in the family when both understand and care about children's nutrition.
Low education also affects the community's economy; 80% of the people of Tanjung Village make their living as farmers and are at a middle, or even lower (poor), economic level. This poverty also contributes to rising stunting rates, because people cannot afford to buy nutritious food or fully attend to children's growth and development (Ernawati, 2020). Research in Yogyakarta suggests a correlation between wealth and the risk of stunting: the more affluent a family is, the lower its risk of stunting (Gustina et al., 2020). On this basis, education is indeed essential for preventing stunting and improving human resources. Preventing and dealing with stunting therefore requires not only the provision of nutritious food but also attention to community education, because low education leads to low human resources, and the economy and health will be low as well, since these factors are interrelated like a cycle (Rahmadhita, 2020).
---
Early-age marriage
Early marriage is a marriage contracted before the legal age and therefore not in accordance with the provisions of Marriage Law No. 16 of 2019, which permits marriage in Indonesia only when both the man and the woman have reached the age of 19 (Law Number 16, 2019).
West Kalimantan was the province with the highest rate of early marriage in 2021, namely 21%, well above the national average of 10.3% (Central Bureau of Statistics, 2021). The regencies with the most applications for dispensation to marry underage are Melawi, Sintang, Sambas, and Ketapang (Kiwi, 2023); applying for a marriage dispensation is a requirement for children under 19 who want to marry, and granting it is the authority of the Religious Courts. Early marriage is also widely practiced in Mempawah Regency. However, most early marriages there, especially in Tanjung Village, do not follow official state procedures, so few couples apply for an underage marriage dispensation; instead, they marry unofficially, according to religion only.
Early marriage in Tanjung Village influences stunting in children: couples who are not mentally ready to marry and raise children affect their children's health and development. This was conveyed by the midwife on duty in Tanjung Village: "In the Tanjung Village area, children often marry below the age determined by the government and the law. At an immature age, their education is also not completed, and this contributes to the problem of stunting." (Interview, 10 August 2022)
The Tanjung village head said the same thing, confirming that the stunting problem can begin with marriage. The following are the results of the author's interview with the Tanjung village head:
"Apart from education, stunting in the Tanjung Village area is caused by marriage; these marriages involve those whose age does not meet the rules, what we usually call early marriage." (Interview, 10 August 2022)
Early marriage is a cause of stunting in the Tanjung Village community because, at their age, couples are immature in making decisions, in thinking, and in understanding parenting patterns. It also affects reproduction, because the reproductive organs are not yet mature: when a woman marries under the age of 18, the uterus is not fully formed, creating a high risk for fetal development and possible death of the baby.
The World Health Organization (WHO) has stated that one of the reasons for the high number of stunting cases in Indonesia is early marriage (Muldiasman et al., 2018). Research in Jakarta found a relationship between early marriage and the incidence of stunting in children aged 24-59 months; the results showed that the mother's age at marriage affects the risk of stunting in her children (Restiana & Fadilah, 2023).
Cases of early marriage in Tanjung Village, Mempawah Hilir District, are caused by several factors: first, arranged marriages carried out by parents, and second, the culture adhered to by one ethnic group. Based on the results of interviews with the Tanjung village head and village midwives: "Parents' willingness to marry off their children at an early age is one of the causes of early child marriage. Besides that, there is a culture adhered to by one ethnic group that considers it normal to marry early." (Interview, 10 August 2022)
With the rise of early marriage in Mempawah Regency, the Regional Indonesian Child Protection Commission (KPAID), together with the Office of Social Affairs, Women's Protection, Child Protection and Village Government (DSPPPA-Pemdes) and the Mempawah Religious Court (PA), signed a Memorandum of Understanding (MoU) on the dispensation of child marriage (Ardiansyah, 2022). The MoU was made so that children do not commit adultery, and the government therefore grants dispensations from the marriage age. However, dispensation from the marriage age can lead to new, more serious social problems, one of which is stunting.
---
Belief in Myths
The myths that have developed in society turn out to be one of the causes of stunting. Myths are often associated with certain legends or stories with mystical or mysterious nuances (Nasrimi, 2021). Myths are also ambiguous and carry many meanings; there are no permanent myths, as almost all of them are flexible, and the stories they tell mostly adapt to new knowledge and to changes in the human environment.
Some people in Tanjung Village still believe in myths about pregnancy, birth, and feeding babies that are not scientifically proven. This belief creates obstacles to acquiring sound knowledge about nutrition. Such myths are passed down within families, which socialize them so that they develop and survive.
Based on the interviews, not all the traditional leaders of Tanjung Village believe in myths, but some of those whose children are stunted still believe in myths about pregnancy and parenting.
"Not all people believe in myths, but some do. Some still believe in myths during pregnancy and when raising children, so that nutritional information and scientific facts are not absorbed properly." (Interview, 10 August 2022) Some of the myths believed by part of the community and passed down through generations include:
Myths that have developed in the community:
1. Do not drink iced water during pregnancy.
2. Do not wrap a towel or cloth around the neck, or the child will be entangled in the umbilical cord.
3. Drink plenty of tofu water so that the child will be born with fair skin.
4. Drink plenty of coconut water.
5. Do not eat spicy food during pregnancy, so that the child does not get sick.
6. When going out while pregnant, carry sharp tools such as safety pins or scissors for protection from supernatural beings.
7. Do not eat pineapple, durian, or tape (fermented cassava), or a miscarriage will occur.
8. A child who is not given ginger water will have dark skin.
9. Some types of vegetables are not allowed because they will interfere with the development of the fetus.
Source: Processed by Authors, 2022
The circulating myths are regarded as a form of supervision by the community, intended to protect the pregnancy. However, many of these assumptions, whether true or false, cause pregnant women to reduce their nutritional intake, even though the forbidden foods are scientifically permissible and have no significant effect on the fetus. In addition to myths during pregnancy, there are also myths circulating about parenting:
Parenting myths:
1. Newborn children are given bananas to eat.
2. Children do not need to be immunized; immunization will only make them sick.
3. When children are sick, most people choose traditional medicine, which is not necessarily scientifically sound.
Source: Processed by Authors, 2022
---
Parenting
Parenting is essential when we have children; it includes how parents care for, treat, educate, and guide their children and provide them with important information. Ideally, parents provide complete care for their children. The early marriages that occurred in Tanjung Village have influenced parenting: most of the children born are cared for by their grandmothers, because the parents work or are too young to know how to care for babies. Based on the interview with the head of the Mempawah Hilir District Health Center, some parents do not provide direct care:
"Most of the children are cared for by their grandmothers or relatives because they work, so they are cared for by their families. Moreover, this is one of the obstacles because it is difficult to provide an understanding of nutrition or appropriate parenting." (Interview on 10 August 2022).
The grandmothers do not know about stunting, malnutrition, what food is good for babies, and the like; they simply feed the children whatever the grandmother and her family eat daily. Parenting like this can hinder the handling and prevention of stunting because, even though routine stunting socialization is given, those present at socialization activities are often not the child's parents but the grandmother, grandfather, or aunt, so it is not uncommon for the results of socialization not to be practiced in everyday life, as the caregivers follow the grandmother's patterns or parent in their own way.
---
Conclusion
Based on the research conducted, it can be concluded that sociocultural factors influence the formation of stunting in children. These include low education, which results in a lack of knowledge and a correspondingly low economic level; early marriage, in which couples are not ready mentally, intellectually, or materially, so that children end up being cared for mainly by their grandmothers; and the myths that develop in the community, believed from early on, so that stunting is something new to the community and short, thin children are simply attributed to the height of their parents (heredity).
---
Suggestion
If education remains low, early marriages remain common, myths continue to spread, and wrong parenting patterns persist without a solution, stunting prevention will be hindered and the seeds of new stunting cases may grow in Tanjung Village.
We start (section The COVID-19 Pandemic and Italy's Response to It) by focusing on Italy's "tough" response to the COVID-19 pandemic, which included a total lockdown with very limited possibility of movement for over 60 million individuals. We then analyse (section Sweden's Softer Approach) Sweden's softer approach, which is based on relatively lax measures and tends to safeguard fundamental constitutional rights. We problematise (section General Disagreement Among Experts: A Pressing Epistemic Problem) the stalemate that arises as a consequence of implementing in society these different approaches, both epistemically grounded and equally justified, in the face of an unknown virus. We point out that in some cases, like the one we discuss here, the epistemic justification that underlies scientific expertise is not enough to direct public debates, and that politicians shouldn't focus exclusively on it. We claim that, especially in situations of emergency when experts disagree, decision makers ought to promote broad discussions, with attention to public reason as well as to constitutional rights, in the attempt to find a shared procedural and democratic agreement on how to act. On these grounds (section The Need of More Public Discourse in Fighting Covid-19) we call for an increased role for different types of expertise in public debates, and thus for the inclusion of ethicists, bioethicists, economists, psychologists, and moral and legal philosophers in any scientific committee responsible for taking important decisions for public health, especially during situations like pandemics. Likewise, in the interest of public reason and representativeness, we also claim that it may be fruitful to bring in non-experts, or experts whose expertise is not based solely on "epistemic status" but rather on experience or political advocacy, on behalf of the homeless, immigrants, or other disenfranchised groups.
This, in expanding the epistemic-expert pool, may also make it "more representative of society as a whole."
THE COVID-19 PANDEMIC AND ITALY'S RESPONSE TO IT
As of September 2020, SARS-CoV-2, a coronavirus which likely originated in Wuhan, China, and which causes COVID-19, has been ravaging the world (almost 30 million people infected), causing the deaths of almost 1,000,000 people (at the time of writing) 1 .
The virus's etiology is still not well understood; however, it is known to propagate quickly among humans through close contact, air currents, contact with contaminated objects, or respiratory droplets produced when an infected patient coughs or sneezes 2 . In its strongest manifestations, the virus may cause acute respiratory infections that lead to the death of the infected individual [the estimated mortality rate was 3.4% as of March (1), with significant regional differences 3 ].
The ease of contagion of COVID-19 (on March 11th, 2020 the World Health Organization declared the COVID-19 situation a pandemic) and the growing number of deaths (with families being decimated), along with the collapse of ICUs, prompted the authorities to adopt measures (such as a generalized reduction of transport and economic activities) to prevent the virus from spreading further. These measures have had dramatic effects (e.g., the freezing of international trade, increased unemployment, crude oil prices below zero) on the world's economy. Such effects are likely to trigger a global recession, despite Governments' and Institutions' attempts to inject money into suffering economies 4 .
In this context, biomedical experts (such as virologists, epidemiologists, immunologists, public health scholars, and statisticians) have acquired an increasingly central role in public debates. They acquired such a role by virtue of their epistemic authority (2), which loosely speaking depends on established knowledge combined with an education of excellence, success in one's field, academic achievements, recognition by colleagues, and high positions in leading institutions.
Biomedical experts have been elaborating models of contagion and strategies for preventing the virus from spreading further, and offering precious advice to politicians for implementing public health policies devised to safeguard society. In the face of a new, aggressive virus for which there was no cure, health systems have shown themselves to be remarkably unprepared. As a consequence, the political authorities have had to rely more and more on the experts to try to formulate health policies suitable to contain the pandemic. The public too, confronted with the imminent serious threat, has not shown the tendencies of mistrust toward science and scientific reasoning recently observed (3).
Two different types of approaches to dealing with the COVID-19 pandemic have, as a result of this process, emerged. One, exemplified by Italy (but also shared, to different degrees, by most governments in the world), is based on severity and control, with state-enforced quarantine. The other, exemplified by Sweden (and partly shared, at the outset at least, by countries like the USA and the UK), is one of relative relaxation, in which quarantine is not implemented for various reasons (economic, constitutional, or allegedly scientific) and relatively lax measures of prevention are deemed sufficient to stop the pandemic 5 .
In this section we briefly look at the Italian response to the coronavirus pandemic. Italy's COVID-19 epidemic, which as of July 2020 had claimed more than 35,000 lives on a population of ∼60 million individuals, exploded in the wealthy and prosperous North, where it put one of Europe's most developed health care systems under significant pressure.
In order to prevent mass contagion throughout the country, which would have caused catastrophic effects in the less prosperous and (infrastructurally, at least) less developed South, the Italian government, advised by a team of medical experts [known as the comitato tecnico scientifico], implemented a series of measures, which involved: (i) restrictions on movement; (ii) enforced quarantine; (iii) bans on travel and assemblies; (iv) closing of all stores except essential services; (v) shutting down all municipal borders; (vi) uniformed police and armed soldiers setting up checkpoints around the country.
According to the stringency index (which records the strictness of "lockdown style" policies that primarily restrict people's behavior) calculated by the Oxford COVID-19 Government Response Tracker 6 , in mid-March Italy scored 90.48, the most stringent level, alongside Spain. At that time Sweden scored 28.57 and was among the countries with the least stringent measures in the world. As of mid-July, Italy scored 58.33 and Sweden 38.89.
The harsh measures implemented by the Italian government (∼2 weeks after the first cases were discovered in the country's North) arguably came too late and did not manage to prevent the surge of cases that heavily taxed the capacity of an extremely well-regarded health care system. In particular, it is deemed that policy makers should have stressed the message "don't meet anyone" rather than merely "stay at home," due to the special familial and relational structure and functioning of Italian society. However, after months of lockdown, the situation in Italy was gradually coming under control and the country, as of July 2020, seemed to have "flattened the curve," meaning that it successfully managed to slow down the spread of the infection 7 . ICU beds were readily available and fewer cases were being discovered. On these grounds, the Italian government ordered a gradual reopening of the country, even though contagion had not been reduced to zero.
---
SWEDEN'S SOFTER APPROACH
Sweden's COVID-19 pandemic has, as of July 2020, caused the death of almost 6,000 people on a population of roughly 10 million individuals (Sweden's population is one sixth of Italy's). At the onset of the pandemic, the Swedish government (advised by some of the country's top epidemiologists, such as Prof Anders Tegnell) decided not to enforce a lockdown (many businesses, including restaurants and bars, stayed open) or to impose strict social-distancing policies (borders and schools for under-16s also remained open). It only implemented a minor set of restrictions (such as banning gatherings of more than 50 people) and relatively lax trust-based measures (such as telling older people to avoid social contact or recommending work from home) to protect and safeguard society.
This was done mostly for two reasons: one scientific/economic, the other constitutional. Firstly, Sweden's Public Health Agency, based on findings it gathered across the country 8 , deemed that closing down all businesses would be useless for stopping the pandemic because COVID-19 had already reached the country. In addition, the biomedical experts consulted by the government (such as Professor Anders Tegnell) 9 remained adamant that enforced quarantine would be undesirable (for psychiatric, psychological, and physical reasons) and even counterproductive (in terms of the economic repercussions it would have on the Swedish economy). Secondly, according to Swedish laws on communicable diseases 10 , it is the citizen, not the Government, that has the responsibility not to spread the disease. These laws tend to defend acquired constitutional rights (such as freedom of movement and freedom of assembly), and because of them quarantine can only be contemplated for people or small areas (such as a school or a hotel) but cannot be legally enforced on larger geographical expanses of land (e.g., regions).
Sweden's less intuitive and more controversial approach can be praised for attempting to safeguard citizens' freedom 11 , which quarantine seems to threaten. However, the potential cost in terms of human lives of this approach has also raised many concerns 12 .
Several researchers 13 have criticized the Agency for Public Health and the experts chosen by the government for not having fully acknowledged the role of asymptomatic carriers 14 . Others have criticized the increasingly neoliberal turn of the Swedish government, the dismantling of its health infrastructure and its large-business orientation (4). Moreover, it is not clear whether this softer approach to the pandemic can really bring about the economic benefits it promises. Recent data have shown that Sweden's economy won't dodge the economic hit despite its light touch to the pandemic 15 .

More importantly, although a relatively recent study (5) suggested that Sweden's limited lockdown measures may have resulted in fewer deaths than expected, evidence is mounting that the Swedish approach to curbing the COVID-19 pandemic has not been as successful as first thought 16 . Mike Ryan, executive director of WHO's Health Emergencies Program, recently condemned herd immunity as a strategy to deal with the infection: "it can lead to a very brutal arithmetic that does not put people and life and suffering at the center of that equation." 17 Regardless of herd immunity, which clearly has not been achieved (the proportion of Swedes carrying antibodies is still believed to be well below 10%), Sweden's rising death toll has indeed become very problematic. Sweden has a death toll greater than that of the United States: 564 deaths per million inhabitants compared with 444, as of July 27 18 . Sweden also has a death toll comparable to that of Italy (581) 19 but nearly five times greater than that of the other Nordic countries combined 20 , which seems to suggest that under similar (cultural, geographical, infrastructural) conditions the death toll could have been much lower; hence, that many lives could have been saved if a different approach had been pursued.

However, as data may quickly change again, we ought to preach prudence and avoid drawing sharp conclusions. For this reason, given the evidence available at the time of writing, it seems reasonable to suggest that the Swedish approach needs, at a minimum, to be redesigned, so as to take into account not just economic parameters but also the protection and defence of the lives of Swedish citizens in the interest of public health. Additionally, even if Sweden's approach were to turn out better than the competing one (which at the moment seems very unlikely), significant concerns would remain about its potential application to other countries, such as Italy. Applying the Swedish approach to Italy (and to many other countries like Italy worldwide) would, we believe, be quite difficult and would likely result in a massacre, for the following reasons. Italy's density is 206 people per km², whereas Sweden's density is one tenth of that, 25 people per km². Sweden's population is, as noted above, one sixth of the population of Italy, and its number of single-person households amounts to ∼2 million, whereas in Italy it is ∼8 million (on a population that is six times larger, though). Moreover, many Italian towns are characterized by a rather compact layout, with aggregates of houses in the city center (the architecture that makes Italian towns so beautiful for tourists). Sweden, on the contrary, has many US-style towns with more space between houses and families, and also has a larger surface area (450,295 km² vs. Italy's 301,338 km²). Sweden is characterized by a high level of social and institutional trust, which is significantly lower in Italy. Finally, Swedes are on average more reserved and less outgoing than Italians, who are known to live among relatives in large communities where close contact and deep personal interactions are the social glue.

8 https://www.folkhalsomyndigheten.se/contentassets/1887947af0524fd8b2c6fa71e0332a87/skattning-avvardplatsbehov-folkhalsomyndigheten.pdf?fbclid=IwAR3Dij1B7jGicxFmRtw7EODymicfo_54W0DoFz6n3Dh7ax9MSte9wnorVF4 (accessed August 2020).
9 https://www.nature.com/articles/d41586-020-01098-x (accessed August 2020).
10 https://www.loc.gov/law/help/health-emergencies/sweden.php (accessed August 2020).
11 https://sverigesradio.se/sida/artikel.aspx?programid=2054&artikel=7463561 (accessed August 2020).
12 https://www.theguardian.com/world/2020/mar/30/catastrophe-swedencoronavirus-stoicism-lockdown-europe (accessed August 2020).
13 A petition was launched by a group of scientists demanding that the government implement stricter measures. The petition was signed by over 2,000 doctors, including the chairman of the Nobel Foundation, Carl-Henrik Heldin.
14 "We're not testing enough, we're not tracking, we're not isolating enough. We have let the virus loose," Cecilia Söderberg-Naucler, an epidemiologist at the Karolinska Institute, stated. Joacim Rocklöv, a professor of epidemiology and public health at Umea University, added, "Does this mean this is a calculated consequence that the government and public health authority think is okay? How many lives are they prepared to sacrifice so as not to . . . risk greater impact on the economy?": https://www.wsws.org/en/articles/2020/04/03/swed-a03.html
15 https://www.politico.eu/article/swedens-cant-escape-economic-hit-withcovid-19-light-touch/ (accessed August 2020).
16 https://forbetterscience.com/2020/04/07/swedish-scientists-call-for-evidencebased-policy-on-covid-19/ (accessed August 2020).
17 https://eu.usatoday.com/story/opinion/2020/07/21/coronavirus-swedish-herdimmunity-drove-up-death-toll-column/5472100002/ (accessed August 2020).
18 https://www.coronatracker.com/country/sweden/ (accessed August 2020).
19 https://www.statista.com/statistics/1104709/coronavirus-deaths-worldwideper-million-inhabitants/ (accessed August 2020).
20 https://www.ft.com/content/46733256-5a84-4429-89e0-8cce9d4095e4 (accessed August 2020).
Having briefly reviewed these two approaches to the current COVID-19 pandemic, we next problematise the epistemological stalemate that seems to arise as a consequence of their implementation in society.
---
GENERAL DISAGREEMENT AMONG EXPERTS: A PRESSING EPISTEMIC PROBLEM
The two cases we discussed above are particularly instructive and offer us an opportunity to problematise the role of science in public debates, and specifically its role in the implementation of public health policies in situations of emergency. Both these approaches are, strictly speaking, scientifically informed and epistemically justified. In brief, this seems to be a case where experts disagree, and their epistemic authority cannot be taken as the benchmark for making complex political decisions that governments should implement afterwards.
As in the case of the outbreak in the UK, scientists disagreed on herd immunity and its effectiveness as a means of controlling the spread of SARS-CoV-2. But the key point for society was not how effective herd immunity was compared to the lockdown, but how many lives the choice of herd immunity could cost 21 . Now, one can be an advocate of science and appreciate the immense contribution that science has made both to the constitution of our democratic States and to the solution of many daily and existential problems. Our societies certainly cannot do without science, in individual lives or in the public square; however, in some cases, like the one we discuss here, the epistemic justification that underlies scientific expertise seems problematic and not solid enough to be uniquely used to model public health policies, which have strong normative and axiological implications for many millions of people and may affect how many lives are spared or lost.
In this sense, both the Italian and the Swedish cases are paradigmatic examples of this problem. In Italy, the lockdown contributed to saving many thousands of lives 22 , even if the human cost of the infection has been very high. Biomedical experts insisted on suggesting harsh measures of social distancing, arguing that the primary and imperative goal was to save all possible human lives. Following this approach, however, could come at the price of impoverishing the country to the point that unemployment and company closures would cause direct and indirect harms to the population not much lower than those caused by COVID-19.
In Sweden, instead, the plan agreed between biomedical experts and government was to keep the infection curve as flat as possible without blocking the country. The authorities relied on the Swedes' compliance with the rules for preventing contagion, without direct impositions and strict sanctions. This "optimized choice" could be defended in terms of cost-benefit analysis, but it remained unclear what the impact of this decision would be on the weaker sections of society (e.g., the elderly).
In the case we present here, the lack of strong epistemic justification, which allowed different responses to be implemented, was due to a number of reasons, the most important of which were probably (i) the novelty of the virus (previously unknown to humanity); (ii) its relatively mysterious etiology (which implied that no one could really be said to be a real expert); and (iii) the fact that experts were still learning about this infection.
This means that, as we write this paper, we are in a sort of paradigm change (6), where hypotheses and theories about novel scientific facts (COVID-19) are very fluid (hence not mature) and subject to almost immediate falsification. This stage both favors and requires consistent disagreement among experts, who sometimes, bona fide, even end up giving ambiguous or contradictory pieces of advice to the population (the most relevant case here being whether people should wear masks) 23 .
Part of the problem therefore seems to be epistemic in character, as it lies in the interpretation of what counts as a fact. Experts in different fields have very different beliefs about what facts are, what causes and effects are, and what counts as reliable data, and indeed draw on very different sources of evidence to back their views (7). This, again, can easily be observed in the interpretations that have formed among experts around the ways to best deal with the pandemic. On the one hand, mathematical modelers (8) assumed the virus would behave like influenza. This assumption suggests that we may allow the virus to circulate under controlled conditions, and may lead decision makers to adopt a lax response (like the Swedish one) that tries to contain the virus's spread without, for instance, harming economic activities or citizens' freedom. Other scientists and public health experts (9, 10), on the other hand, have consistently called for mass testing, tracking, and the adoption of stringent measures of social isolation, which are rooted in a very different belief: the belief that the virus is not anything like common influenza and should not be allowed to spread, even under controlled conditions (Italy's response).
Another part of the problem, however, is political in nature and has to do with the way certain political decisions are translated into social policies. This also relates to the question of who chooses whom, and what kind of expertise is invited into those committees responsible for taking crucial decisions on public health. In the cases we have analyzed in this paper, it is clear that politics has failed to listen to society as a whole and has not used the critical tool of public reason to critically analyse and refine, when needed, the medical experts' advice.
The approach we propose here thus suggests that one informed viewpoint isn't necessarily enough, or better than another informed one, but that a wider range of opinions (provided they are reasonable and sound) ought to be listened to in order for effective decisions to be implemented, especially if such decisions involve normative, axiological components and are applied to public health. The idea is not just that certain expert recommendations are based on a poorly established factual basis. This is a common situation, although often overlooked.
The point is that the biomedical experts are called to advise decisions that are political in character and have enormous consequences on people's lives based on their specific scientific expertise. Such scientific expertise, in many cases, does not include public principles, values or public procedures that are instead typical of a pluralist liberal democracy. Experts typically answer technical questions and provide recommendations that are related to their expertise. Decisions with more general consequences should be made by representatives of the whole society according to formalized procedures (11-13) (Pellizzoni 24 ).
---
THE NEED OF MORE PUBLIC DISCOURSE IN FIGHTING COVID-19
This means that one might call, as we do, for a broader and wider conception of expertise as well as for more representativeness, especially when scientific agreement has not yet crystallized and, as in the case we discussed above, biomedical experts alone seem unable to formulate broadly shared, uncontroversial health policies.
For this reason, in such cases, politicians should not uncritically adopt only medical experts' opinions (which, as shown above, can be diametrically divergent); rather, they should promote and articulate these discussions in the wider society (14), with attention to ethical and moral principles as well as to constitutional rights and the rights of minorities (15): in brief, in the light of public reason (16).
As O'Neill brilliantly put it: "we have to supply a structure that the members of a wider, potentially diverse and unspecified, plurality can follow, by adopting and following principles of thought and action that an unrestricted audience can follow" (17). Such discussions should therefore promote a shared procedural and democratic agreement on how to act in situations of emergency (e.g., the COVID-19 pandemic), with high trust being placed in reliable institutions (to avoid the dangers of relativism) but also in various other forms of expertise (not only epistemic ones).

24 http://www.leparoleelecose.it/?p=38050 (accessed August 2020).
We surely welcome the recent adoption of ethical principles in many local, regional, national and international committees, especially in medicine [e.g., (18)]. We also acknowledge that, nowadays, non-biomedical experts tend to be included in many biomedical boards and commissions. For example, bioethicists had very important roles during the Ebola epidemic (19). However, with very few exceptions (20, 21), the current COVID-19 pandemic has highlighted significant underlying epistemic ruptures between medical science, other types of expertise, the general public, and the political response. This is because biomedical experts, by virtue of their scientific authority, have often been uncritically recognized as more authoritative than other epistemic experts or non-epistemic ones (such as human rights activists, provided that they follow some basic principles of rationality and fact verification). This is perhaps a natural assumption to make in cases like the one we discussed in this paper; however, it may lead, as we have attempted to show, to undesirable consequences and to a stalemate that may threaten the functioning of our societies. It is our opinion that the best strategy to bridge such ruptures and to avoid such problems is to open up science to public discourse and reason and to include, in any scientific committee responsible for taking crucial decisions on public health, ethicists, bioethicists, psychologists, economists, and moral and legal philosophers 25 . More importantly, we believe that it may be even more fruitful to bring in and give voice to non-experts, or experts whose expertise is not based solely on "epistemic status" but rather on either experience or political advocacy, whether of the homeless, the immigrant, or other disenfranchised groups. This process may also contribute to making the epistemic expertise of experts "more representative of society as a whole."
In other words, echoing philosopher and legal scholar Melissa Williams, we argue that "a fair and just public discourse needs at least some direct representation of the voices of those who are minorities or live in dependence because the majority groups (here experts) do not share their particular history and experience" (15).
---
CONCLUSION
The type of expert recommendations we have considered here, although technically flawless, are not neutral for individuals and for society, and should therefore be evaluated according to procedures that do not merely assess the epistemic authority of their advocates or the adherence of their proposals to scientific criteria. The values at stake are different and often conflicting (the right to health, political freedom, the right to run a business) and the prevalence of one or the other should be entrusted to an assessment typical of decisions taken in the public sphere, with the participation of various forms of expertise, chosen representatively. And just as we should never give up the contribution of (medical) experts (as in our case), so the state of emergency and the limited time available to make an effective decision should never prevent the inclusion of normative and axiological elements in the public debate. In other words, we should be drawing on every type of potentially relevant expertise across the humanities, social and natural sciences, and on insights from the wider society.
Thus, in our view, the involvement of non-biomedical experts and under-represented categories capable of drawing attention to general values, other principles and procedures should be welcomed, as it could help make decisions that are more representative of society as a whole.
---
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
---
AUTHOR CONTRIBUTIONS
All authors contributed equally to the writing of this paper.
---
CONFLICT OF INTEREST
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. |
Highlighting the importance of the concepts of professionalism and work ethics in agricultural education in Botswana, before and after the country achieved its independence, is crucial to the education system. The purpose of this study is to explore the concepts of professionalism and work ethics in the history of the teaching and learning of agriculture in Botswana. The paper draws on the discussions raised in agricultural education development, covering the period prior to 1966 (the Bechuanaland Protectorate) and the period from 1966 to the 21st century. Professionalism and work ethics in agricultural education provide an analysis of growth and progress in the profession. It can be safely argued that the different stakeholders who were instrumental in revolutionizing the evolution of agricultural education in Botswana needed to integrate professionalism and work ethics in the program.

---

Introduction, Background, and Justification
This article will address and discuss the concepts of professionalism and work ethics in the teaching and learning of agriculture in schools before and after Botswana achieved its independence, for the purposes of the progress and development of the country's education system. The study discusses the history of agricultural education, documenting evidence of professional work ethics and comparing progress made pre- and post-independence. The progress of agricultural education in the education system of Botswana has followed a relaxed but bumpy and unpredictable path (Hulela and Miller, 2003; Rammolai, 2009) worth documenting. It can be noted that agricultural education in the school curriculum of Botswana was prompted by, among other factors, the tradition and culture of rearing livestock and growing crops practised by the people of the Bechuanaland Protectorate since 1885. Much of the population of Botswana have had the privilege of being traditional subsistence farmers, and agricultural education therefore became a highly recognized program in the education system in the context of growth and development, with the hope that it would contribute knowledge, skills and attitudes as well as employment opportunities.
Studies by Rammolai (2009), Squire (2009), Baliyan et al. (2021), and Moakofi et al. (2017) show that within agricultural education in the country there are some challenges affecting the teaching and learning of agriculture, including teacher morals, values, and ethics. Concerns have also been raised regarding resource availability for school agricultural education programs, particularly in the form of inadequately prepared teachers in schools. In addition, more than 50 years after the country attained independence in 1966 (Suping, 2022), predictions about the program's contributions towards human resource development (Pheko and Molefhe, 2016) and the global emerging issues of poverty, climate change, unemployment, and skill development (FAO, 2017) are not explicitly clear in their progress, let alone on professionalism and ethical issues. One cannot make a supposition about where the profession of agricultural education is now because, as found by Pheko and Molefhe (2016), research on employability and other socio-economic issues is negligible and inaccessible, thus creating a critical challenge for the country. The historical framework of agricultural education shows that its inclusion in the school curriculum during the protectorate and post-independence eras was well supported on sound grounds and justification. It can be argued that the different stakeholders who were instrumental in revolutionizing agricultural education in Botswana made efforts to integrate professionalism and work ethics in the program.
Agricultural education has since become a component of the education system, thus requiring professionalism and work ethics encompassing the personal and corporate standards of behaviour expected of any professional body. There is no doubt that teacher education programs have prepared teachers since the subject's establishment in schools prior to and after 1966. What needs to be done is to ensure that the integration of professionalism and work ethics principles is formalized to enhance teacher education through short courses, workshops, and in-service training.
This shows that agricultural education in Botswana is one area that is made up of proactive teachers and educators who have been influential in activities of rural development (farming), small businesses (entrepreneurship) and extra-curricular activities (sports) in schools, and who form an active, positive group of individuals in primary and secondary schools. However, as implied in the findings of a study conducted by Rammolai (2009), it was revealed that agriculture and its image have begun to regress because of the inadequacy of trained teachers, the limited perception that agriculture is basically about food production, and the fact that very little reinforcement of practical relevance to students' needs is exercised. Rammolai (2009) further cautions that practical instruction should not be used as intensive manual labour but should target the specific technical and professional skills that are needed in the agricultural industry. These include 21st century skills that encompass professionalism and ethics principles.
In recent developments in the administration, management, and practice of school subjects, agriculture has been grouped with other practical subjects, a move that minimizes the advancement and progress of teachers as managers under the auspices of practical subjects and adds no value to the profession of agricultural education. By professional standards, teachers should have the broad and complete knowledge and skills needed for licensing in the profession, to help learners meet the challenges and opportunities of the twenty-first century. At the beginning of the 21st century, two noticeable observations became apparent in agricultural education, making it clear to many that professionalism and work ethics in teaching the subject would be compromised, if not a foregone conclusion.
---
Purpose of Study
The purpose of this study is to document evidence of professionalism and work ethics in the teaching and learning of agriculture in schools before and after the country achieved its independence. The study is an evaluation of the progress of professionalism and work ethics in the field of teaching and learning since agricultural education was introduced in schools. Specifically, this study will explore the following objectives:
1) To describe, review, and discuss the history of agricultural education of the pre- and post-independence era.
2) To document the evidence of professionalism and work ethics progress in the teaching and learning of agriculture in Botswana.
3) To compare progress made before and after independence in the teaching of agriculture.
---
Design of the Study
According to Serena Balfour (née McConnel), the founder of Tutume McConnel Community between 1968 and 1970 (personal electronic communication, January 23, 2024), the model of education was meant to sustain the students and teachers, who had no regular supply of food. Serena Balfour further explained that the school was established with half of the land intended for growing vegetables and grain crops, raising cattle, and technical education (brigade education). Ms Balfour further stated: "We had some wonderful volunteers, young teachers from Denmark, the United Kingdom and Sweden, interested in maintaining the school farm, and since we depended on it for our food, the farm became a necessity for our students to learn about farming."
This is a historical study compiled through desk research exploring the understanding of professionalism and work ethics behaviours in agricultural education during the pre- and post-independence periods. Historical studies, as described by Coreil (2008), are used in the social sciences to understand progress and relationships between constructs. According to Green and Cohen (2021), a desk research design ensures that data are derived from secondary sources that do not require direct contact between the researcher and participants. The study used indirect, secondary data, which does not risk infringing on human subjects.
The existing literature reviewed included policy documents of the Ministry of Education, such as school agriculture syllabi and teacher education curricula for colleges (for primary and secondary teachers), informal discussions with teachers of agriculture and senior citizens of the Ministry of Education, journals, and policy documents. Some examples and illustrations were drawn from experience and from previous research into agricultural education stories of the Bechuanaland Protectorate and the post-independence experience. This study provides evidence of work ethics and professionalism indicators.
---

The History of Agricultural Education of the Pre- and Post-Independence Era in Botswana

This is described in three stages: before, during, and after the missionary era.
Stage 1: In the period prior to the arrival of the missionaries, some forms of cultural informal education existed within the Bogwera and Bojale initiation schools for boys and girls, before and after the Bechuanaland Protectorate was formed in 1885. The informal cultural activities taught to women and children included womanhood skills, domestic and agricultural activities, sex and behaviour towards men and women, and home management skills such as sewing, cooking, carving mortars, making tools like hoes and axe handles for cultivating the soil or weeding, and weaving beads. This type of education laid the basis for the values within the professional guiding principles expected of learners. The result was that women qualified for motherhood and marriage teaching, while boys were introduced to hunting and livestock rearing: traditional living skills based on gender (Moorad, 1993; Mafela, 1994). Based on this form of education, professionalism and work ethics progressed on the basis of the ideas and model of education of the era. Boys and girls developed some ethics through the initiation schools.
Stage 2: The Bechuanaland Protectorate era (the period of missionary involvement) can be equated to colonial education, during which different missionaries operated in the Protectorate. History shows that missionaries arrived at different times with the goals of teaching Christianity and providing health services. To achieve these goals, most missionaries established schools with small farms or gardens to teach agricultural science vocations and academic skills for food provision to local families; hence agriculture (agricultural education) was introduced alongside the Christian curriculum.
Based on Moorad (1993), the establishment of the teaching and learning of agriculture in mission schools in this era was not based on professional development of knowledge and skills in agriculture. Rather, it arose from a request for practical subjects to be offered, a need for the curriculum to go beyond the scriptural and into "industrial education". According to Moorad, this education was anticipated and perceived to benefit rural families and communities; hence it was community-based education (Moorad, 1993). In this period, schools were created as a way of socialising children into the accepted values and norms of society and preparing them for their later adult roles. For example, as pointed out by Wass (1972) and Moorad (1993), the establishment of Moeng College was conceptualized in the 1930s as an initiative of the Bamangwato people of the Bechuanaland Protectorate and became successful after the Second World War. The goal of this school, according to Wass, was the advancement of African children through higher education and training; hence the establishment of a large farm with agricultural infrastructure. Regarding Moeng College, Moorad (1993) stated that "the school offered courses in agriculture, animal husbandry, mechanics, building, carpentry, bookkeeping and typing. It had a farm and typified a real community situation where the students were all expected to take part in the farm activities" (p. 65). Wass (1972) further explained that around 1935, when Moeng was being planned, the then Director of Education H.J.E. Dumbrell, in his report of 1935, proposed the development of adult schools to cater for the needs of men and women who could only attend in the late afternoons and evenings.
According to Wass, the Dumbrell report referred to the Dutch Reformed Church's success in Mochudi and the Kalahari village areas, which had established adult schools without assistance, driven by the desire to learn. The report expanded the idea of adult schools to use the services of other departments to ensure adults were taught things of real value, specifically mentioning the agriculture-related fields of Agriculture, Medical and Veterinary services. This marked the birth of agricultural science in the school curriculum of the Bechuanaland Protectorate. Nkomazana and Setume (2016) wrote that, of all the missionaries, the London Missionary Society (LMS), later part of the United Congregational Church of Southern Africa (UCCSA), is believed to have arrived in the southern African region as early as the 1800s. It first established the Tiger Kloof Educational Institution in Vryburg, South Africa, before crossing into the Bechuanaland Protectorate to start Moeding College at Otse in the southeast of Botswana in 1962. The curriculum at Moeding College was informed by that of Tiger Kloof in South Africa, which, according to Arko-Achemfuor (2014), applied the principle of education with production, raising vegetable gardens and rearing cattle. Agriculture formed part of the core curriculum at Moeding College both before and after 1966.
In 1801, another group of missionaries, the Dutch Reformed Church, settled at Mochudi; founded by Rev. Henri Gronin, the mission began working in 1863 to start the first home craft school to prepare women for home living. The online history of St Theresa mission school in Lobatse, as described by Sumani (2017), also shows that a decree signed by Pope Leo XIII in 1879 entrusted the Zambezi mission to the Society of Jesus to begin the Roman Catholic Church's missionary work in the Bechuanaland Protectorate. These missionaries arrived in Tati, in the northeast of the protectorate near Rhodesia (now Zimbabwe), on 17 August 1879. It took a while before the Roman Catholic Church (RCC) mission was established at Khale Hill in 1923. The missionaries at Khale Hill then started the Adolescent Training Centre in 1934, as summarized by Hulela and Miller (2003), to teach agriculture to boys only, and started a large farm of 1000 acres with livestock, orchard trees, and crops such as maize, peas, sorghum, potatoes, pasture, teff, and cowpeas. According to Bayani (2015), the Lutheran missionaries of the Hermannsburg Missionary Society arrived in Botswana in 1957, having been invited by President Andries Pretorius of the Transvaal in South Africa.
Stage 3: The pioneers' era, which overlaps with the missionary era. As mentioned by Moorad (1993), even though Agricultural Education is said to have been first introduced in schools through the missionaries, two pioneers, Kgalemang Tumediso Motsete and Patrick van Rensburg, also played significant roles in the 1930s and 1960s respectively. The first of these pioneers was Kgalemang Tumediso Motsete. This distinguished citizen established a mixed school called the Tati Training Institute, the first secondary school in the northeastern part of the country, which later moved to Nyewele in Tshesebe in 1932 on political grounds. According to Melczer (2019), Motsete's mixed school offered agriculture as a key subject, occupying one-third of the space in the school curriculum. The second mover in this stage of pioneers was Henry James Edward Dumbrell, the then Director of Education, who proposed that agricultural science be studied as part of the education curriculum (Hulela & Miller, 2003).
The third figure in this notable educational historical venture was Patrick van Rensburg, who arrived in the Protectorate in the early 1960s, just before the country achieved independence in 1966. Van Rensburg, a South African by birth, settled in Serowe with the intention of establishing a progressive form of education at Swaneng Hill School, Madiba Secondary School in Mahalapye, and Shashe River School in Tonota near Francistown. The three schools were started to admit standard seven dropouts from the primary school leaving examination (PSLE). The schools' curricula placed emphasis on practical subjects, including agriculture, building, carpentry, metalwork, technical drawing, and typing. Van Rensburg's approach to agricultural education was revolutionary and perceived as a radical model of education at the time: the three schools were attached to farms and encouraged learning by doing, modelled as education with production in Botswana, which included idiosyncratic forms of vocational training, on-the-job education, and active production for the community around the schools.
In the pre-independence era of agricultural education, some pioneers played a momentous role in its development. Throughout the Bechuanaland Protectorate era, as indicated by Moorad (1993) and Rammolai (2009), the teaching and learning of agriculture in schools was characterized by the missionaries, and some key individual pioneers made significant contributions to the development of agricultural education in the country. One historic moment worth noting is the formation of the then University of Basutoland, Bechuanaland and Swaziland (UBBS). According to the University of Botswana (2023 website), UBBS was formed on January 1, 1964, following an agreement reached in mid-1962 between the High Commission of the then three British protectorates (territories) (Hulela & Miller, 2003). The schools of this era taught agricultural activities as part of the extra-mural curriculum, used for manual work and for the punishment of students, with no formalized assessment.
---
Post-independence era:
When the Bechuanaland Protectorate era closed and the country became the new independent state of Botswana, the teaching and learning of agriculture was offered at Moeng College, Moeding College, St Joseph's College, and the three schools Swaneng Hill School, Madiba School, and Shashe River School. There was a proposal for integration into the new education system so that agriculture would be modernized and restructured to become an integral part of the school curriculum. This occurred because, at that point in history, agricultural education within the curriculum was scant and characterised by minimal assessment. In 1970, McConnell College was established in the rural village of Tutume in the Central District, where agricultural science was strong in the curriculum, supported by brigade education and a farm for the teaching of agriculture.
During the same year, the LMS also expanded its mission and set up a school at Maun. According to Hulela and Miller (2003), the story of agricultural education developed rapidly soon after independence because of the review of the entire education system by the government of Botswana, which produced the 1977 Education for Kagisano policy to address the imbalance in the school curriculum. Through this policy, agriculture became a compulsory subject in junior secondary schools and an option for senior school students, while it remained part of environmental science in the primary school curriculum. Post-independence developments affecting agricultural education can be outlined on a yearly basis and include the following: agricultural education developed syllabi, booklets, manuals, and textbooks to guide the teaching of agriculture in primary, junior secondary, and senior secondary schools, as well as vocational agriculture for technical and vocational education. This was facilitated by a panel of teachers, who formed the Botswana Agriculture Teachers Association (BATA) to coordinate activities for the school agriculture curriculum.
8) 1991-1995: The establishment of the diploma course in agricultural education at BCA/BUAN and the diploma in secondary education at Tonota College of Education for training teachers with an agriculture component.
9) 1994: Approximately sixteen years later, Botswana's education policy was reviewed, leading to the 1994 Revised National Policy on Education (RNPE). This revised policy recommended diversification of the curriculum by incorporating, among others, foundational skills, the vocational orientation of academic subjects, and practical subjects like agriculture.
10) 1997: The Department of Teacher Training and Development was established to provide leadership and direction for in-service and pre-service teacher training, including teachers of agriculture at primary and secondary schools. Teacher professional development was an emphasis of the RNPE, which replaced Education for Kagisano (Mphale, 2014).
---
Important achievements
A study by Mautle and Weeks (1994) documented such achievements. As alluded to by Amadi and Blessing (2016), the roles played by agricultural education in society are countless: it provides skilled personnel for the workforce and establishes vibrant research for farmers' continuous development of new knowledge. Agricultural education also ensures that young farmers' clubs and rural development communities remain lively, as it provides a platform for active discussion and a learning environment to meet the national food drive. Currently, it is estimated that each of the 33 senior secondary schools has at least 7 teachers with a BSc or higher qualification in agricultural education (making 231 teachers), while the 207 public junior secondary schools each have approximately 6 teachers, with a total of 1260 teachers holding a diploma in secondary education or a Bachelor's degree in agricultural education teaching agriculture in public junior secondary schools.
Objective 2: To document the evidence of professionalism and work ethics in the teaching and learning of agriculture in Botswana.
In responding to this objective, it is important to note that agricultural education practitioners in the Bechuanaland Protectorate were neither trained nor prepared as educators. According to Paschal (2023), teacher education plays an important role in the teaching and learning process because teaching is regarded as the mother of all other professions (p. 82). According to Pansiri et al. (2021), schools are generally supposed to be a safe place for learners, but, as found in their study, unethical conduct in the education system in Botswana is heightened by the lack of an Africanized ethical code of conduct for educators and by double-dipping among public officers. A study by Bagwasi (2018), which traced the history of education in Botswana from pre-independence to the present, examined, among other things, the policies that have influenced its development in relation to the Western education system. The changes that occurred in the education system after 1966 were noted to include teacher professional development. This study uses the professional ethics conceptualized by Paschal (2023), as studied in Tanzania, to assess their existence in the pre- and post-independence periods (Table 2). Analysis of the literature revealed indicators of shortfalls in professionalism and work ethics, some of which have persisted from pre- to post-independence.
In a study conducted by Moswela and Gobagoba (2014), the findings revealed that teacher trainees were knowledgeable about what ethics and professionalism entail, but this did not translate into practice, as some teacher trainees still indulged in love affairs with students, a situation that undermines professionalism and work ethics. Professionalism and work ethics in the teaching and learning of agriculture are critically important. Participants (retired and serving teachers), educators, community members, and the historical documents perused showed that, pre- and post-independence, agriculture has always been taught on the basis of one or a combination of the following philosophical beliefs. According to participants, agricultural education was introduced into the Botswana education system with the belief that it 1) imparts knowledge and skills to effect change (behaviourism), 2) prepares a child to enter certain cultures in society (conservatism), 3) builds on existing knowledge and skills acquired at home (constructivism), 4) develops mastery of a skill (essentialism), 5) builds on the needs of the learner (humanism), 6) emphasizes the usability of practical skills (pragmatism), and 7) promotes active learning (progressivism). Based on the philosophies held by stakeholders, the teaching and learning of agriculture was introduced with both practical and theory components administered and supervised during and after class hours.
Professionalism, as described by Creasy (2015), is an ideal described through several aspects and characteristics, as shown in Table 1, to which individuals and occupational groups aspire in order to distinguish themselves from other workers. According to Lea (2019), it has several understandings, rooted in the disciplines of philosophy and sociology and in studies of the professions. To measure progress in professionalism in agricultural education in the history of Botswana to date, some characteristics borrowed from a study by Anitha and Krishnaveni (2013) were drawn on to show evidence, or lack of evidence, from the pre- to the post-independence era. In a study conducted by Lashgarara and Abadi (2009) at Azad University in Tehran, Iran, the findings indicated that more than forty characteristics of an effective agriculture teacher could be categorised into the areas of instruction, community relations, marketing, professionalism/professional growth, program planning/management, and personal qualities. Professionalism is also about the excellence and character of a teacher in the teaching environment: the knowledge, skills, personality, and practices that a teacher must acquire to be an effective educator in the career of teaching the subject (OECD, 2016). It should not be taken to mean wearing a suit or carrying a briefcase, but rather conducting oneself with responsibility, integrity, accountability, and excellence, which also implies communicating effectively and appropriately and being productive.
In a story entitled "Mishandling agricultural practical projects in secondary schools in Botswana", it is evident that agriculture is one of the secondary school subjects in which what could be termed "teacher professionalism" has to some extent been compromised. The article spells out that teachers of agriculture have raised the concern that practical assessment is not part of their job description and should therefore be removed from their supervisory and assessment duties for practical projects. Teachers' perception of their subject or profession is thus an important aspect of professionalism. Teachers' roles and responsibilities include conducting instruction (both practical and theoretical) to teach students, enhancing youth leadership through integration into classroom instruction, imparting knowledge and skills, creating plans, and informing students about developments in agriculture (Rice & Kitchel, 2016). O'Sullivan, van Mook, Fewtrell and Wass (2012) view professionalism as having become important in the 21st century because professional values and behaviours are intrinsic to all practices, yet it remains one of the most difficult subjects to integrate clearly into curricula. Professionalism and a code of ethics in the teaching and learning of agriculture, as indicated by the Uganda Ministry of Agriculture, Animal Industry and Fisheries (n.d.), lie at the centre of the development of the profession. As alluded to by Talbert, Croom, LaRose, Vaughn and Lee (2022), teaching, like several other fields of education, is an occupation that requires a long, specific program of preparation and uses a code of ethics to guide the conduct of individuals in the profession. Teaching is a profession that requires an individual to act with integrity; show courtesy and respect to students, other educators, and the stakeholders one works with; and be responsible and accountable in all one does.
Work ethics refer to the accepted morals, values, and principles of right conduct for a profession or area of service such as the teaching of agriculture. KAR (2016) states that these are standards or "rules" that create obligations to refrain from misconduct such as missing classes, failing to listen to learners and colleagues, stealing, cheating, malpractice, rape, murder, assault, refusal to perform a duty, or fraud in the workplace. In Botswana, teachers of agriculture in secondary schools work with small livestock, crops, and students, as well as the community and the parents of these children; therefore, the issue of ethics in agricultural education is important. In the past two or more decades, changes, upheavals, and growing awareness of conflict have emerged, coupled with malpractice, teacher absenteeism, abusive and violent behaviour such as withholding practical marks, sexual relationships, abuse of position, and several other concerns in schools. Zimdahl and Holtzer (2018) emphasise that the classroom offers an effective starting place for ethics education; unfortunately, curricular offerings focusing on ethical principles and their application in agriculture and related fields are rarely available in public institutions of higher learning. The authors continue that courses on ethics should become a key component of the agricultural education curriculum because these issues are an increasing concern. For example, considering the mishaps that occur in education when educators redefine their responsibilities and roles in teaching, ethics can be explained as the rightness or wrongness of actions by an individual or a group of professionals (CAST, 2005).
There are three secular ethical traditions, or what theorists call theories, explaining issues of civil and human rights, wrongs, and privileges; these exist and can be offered in teacher preparation.
The findings from Zimdahl (2000) revealed that of the 59 universities surveyed in the United States of America, only fifteen (15) offered agricultural ethics in their curriculum or as a topic within the agricultural curriculum. The study advanced several reasons why it is so uncommon for universities to embrace agricultural ethics in their curricula, including a lack of ethical and philosophical knowledge of the concept. According to Elliott and June (2018), ethics education became part of higher education over the past twenty-five years, and today a good number of universities offer it. Different courses are offered in different universities in the USA to instil the ethics of teaching and learning agriculture (Elliott & June, 2018).
In Tanzania, for example, as alluded to by Mfaume and Bilinga (2017), lapses in professionalism in education were attributed to teachers' low salaries and remuneration, poor living and working conditions, the influence of science and technology, lack of professional knowledge, poor management, and infrequent visits to and inspections of schools. Professionalism can also be related to educational policy reforms and practices: how they are managed to address teachers' needs, recruitment procedures, a supportive policy environment, and professional development (Ifanti & Fotopoulopou, 2011). Day and Sachs (2004) also found that teachers engage in work that has fundamental moral and ethical as well as instrumental purposes in teaching and learning. This is because programs of education are changing; day-to-day practices in the school environment, working conditions, teacher structures, and teacher-to-student ratios have also changed, professionally and ethically. As indicated in the career development literature, an agriculture teacher has a wide range of responsibilities, as shown in Table 3.
---
Summary and Conclusions
1) This article set out to address and discuss the progress of the concepts of professionalism and work ethics in the teaching and learning of agriculture in schools before and after Botswana achieved its independence. The stages in the history of agricultural education are evidence of progress in the professional responsibilities of a teacher (Creasy, 2015), reflecting teacher education, the maintenance of accurate records, communication with stakeholders (students, families, and others), working in and contributing to schools and communities, and growing and developing the profession. Teacher education programs agreed on building the dispositions and characteristics that teacher candidates would display in the profession.
2) The progress of agricultural education in the education system of Botswana has followed a slow, bumpy, and unpredictable path due to inadequate resources. Overall, evidence of professionalism and ethics in agricultural education is key to its development, in terms of both its expansion and its contribution to the economy of any country. The review has revealed that both pre-service and professional education are critical in developing principles of professionalism and work ethics.
3) Teacher recruitment is the starting point for professionalism and work ethics in teaching. In developed nations, assessment strategies and standards beyond the degree qualifications held by potential teachers are used for teacher selection, to ensure that only individuals of high capability enter the profession and to control the degree of self-regulation and credentialing.
4) Finally, it is strongly recommended that teacher recruitment criteria build on teacher professionalism and that guiding standards for recruitment be further developed. According to Calvin and Pense (2013), five factors present both challenges and solutions for recruitment into a career in agricultural education: time, the economy, family, technology, and image.
---
5) In conclusion, although it is easy to trace the history of agricultural education, it is not obvious how professionalism progressed, as there are no ethical standards and policies against which to measure its advancement. The program has already made contributions socially, economically, and culturally, and it is the government's responsibility to monitor and maintain its strongholds while improving the weak areas relating to human resource development, limited capacity for policy development, and infrastructure upgrading in education.
---
Conclusion
This paper has traced the progress of professionalism and work ethics in the history of agricultural education in pre- and post-independence Botswana. It has been evidently shown that the teaching of agriculture in pre-independence as
---
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper. |
Introduction: Men who have sex with men (MSM) are one of the key population groups at high risk of transmitting and acquiring HIV. They are stigmatized due to their behavior. It is therefore of prime importance to identify the correlates of stigma among MSM so that measures can be taken to minimize them. Objective: To determine the correlates of behavior-related stigma among MSM in Western Province, Sri Lanka. Methods: A cross-sectional study with an analytical component was conducted among MSM in the Western Province of Sri Lanka. The sample size was 564. Participants were recruited using respondent-driven sampling. Data collection was done using two interviewer-administered questionnaires: the Behavior Related Stigma Scale, a tool developed and validated by the investigators to assess the level of stigma, and a separate questionnaire, also developed by the investigators, to assess the correlates of stigma. Correlates of behavior-related stigma among MSM were determined by multivariate analysis using adjusted odds ratios. Results: Advancing age (>29 years) (p = 0.01), being educated up to grade 10 (p = 0.039), family and friends considering homosexuality a psychiatric disease (p = 0.018), the experience of sexual abuse in childhood (p < 0.001), the experience of non-verbal harassment from relatives (p < 0.001), being arrested by police during one's lifetime (p < 0.001), and not carrying condoms because they were not supplied (p = 0.007) were significantly positively associated with a high level of behavior-related stigma among MSM. Being educated regarding HIV/AIDS by the health sector and the media was negatively associated. Conclusions: There are modifiable factors associated with behavior-related stigma among MSM. Awareness programs should be conducted to sensitize the public regarding same-sex behavior, thus minimizing harassment from society.
Key populations (KP) are groups that bear a high burden of Human Immunodeficiency Virus (HIV) acquisition in many settings. Men who have sex with men (MSM) are identified as one KP in both international and Sri Lankan settings (United Nations Programme on HIV/AIDS) [1]. They have been so classified due to their key sexual behavior, which makes them more prone to acquiring HIV.
Men who have sex with men are a hidden population in both international and Sri Lankan settings. Having unprotected anal intercourse with male partners mainly contributes to the high risk of contracting HIV by MSM [2]. A combination of behavioral, socioeconomic, and structural factors contributes to the increased risk, vulnerability, and/or burden of acquiring HIV infection. Access to relevant health care and other services is significantly lower in this group than in the rest of the population [3]. The stigma vested upon MSM due to their homosexual behavior is a key contributory factor for their reduced access to health care services [4].
Stigma is an attribute, behavior, or reputation that is socially discrediting in a particular way [5]. When stigma is acted upon, the result is discrimination [6].
Stigma and discrimination adversely affect the social, psychological, and medical aspects of the affected person's life. Those affected may be deserted by their families, rejected at school, and become school dropouts. These factors lead them to a more vulnerable life. Further, multiple health complications can also occur, including anxiety, depression, self-harm, suicide attempts, poor self-image, and low self-esteem. Stigma and discrimination are identified as key obstacles to universal access to HIV prevention, treatment, and care [7].
Stigma and discrimination also lead to feelings of shame, worthlessness, and fear of rejection. In Sri Lankan settings, MSM are discriminated against by police, legal professionals, the armed forces, and other government officers due to a lack of understanding and the existing legal provisions. There is global as well as local evidence that MSM face discrimination from society [8,9]. A local qualitative study found that they are discriminated against by people known to them, such as family, friends, and neighbors [10]. It is also understood that medical mistrust mediates the relationship between stigma and engagement in care [11].
Men who have sex with men are stigmatized in society mainly due to their sexual orientation. They are often forced into hetero-normative marriages, which ultimately disrupts their family lives [12]. Their marital status plays a significant role in the stigma vested upon them. They face different types of harassment from society, which can be non-verbal, verbal, physical, or sexual. These harassments contribute to the development of self-stigma, which keeps them separated from the mainstream [13].
Reducing stigma and discrimination is identified as one of the four critical enablers that help to overcome major barriers to service uptake among MSM, including social exclusion and marginalization, criminalization, stigma, and inequity [14]. Ending the AIDS epidemic by 2030 has been identified as a key mandate for achieving the Sustainable Development Goals [15]. The Ministry of Health, Sri Lanka is targeting an end to the AIDS epidemic by 2025, five years ahead of the targets set by the United Nations [16]. Measures can be taken to reduce behavior-related stigma among MSM by minimizing their exposure to the modifiable determinants identified in this study.
The objective of this study was to determine the correlates of behavior-related stigma among men who have sex with men in Western Province, Sri Lanka.
---
Methods
---
Study Design
A cross-sectional analytical study was conducted among MSM in the Western province, of Sri Lanka.
---
Participants and Data Collection
Data collection was carried out from July to November 2018. The sample size of 564 was calculated scientifically, and the number recruited from each district was proportionate to that district's MSM population. Participants aged over 18 years, residing in the study setting for more than six months, and holding a valid peer recruitment coupon were included in the study. Individuals diagnosed with a mental disorder were excluded.
Respondent-driven sampling (RDS) was used as the sampling technique, starting from eleven seeds. Correlates were assessed using an interviewer-administered questionnaire developed and validated by the investigators. The Behavior-Related Stigma Scale (BRSS), developed and validated by the same investigators, was used to assess the level of stigma. Data collection was carried out by four sociology graduates.
---
Data Analysis
Sample proportions and population proportions were analyzed using the RDS-A version 7.0 package. Since these were very similar, unweighted analysis was carried out to identify the correlates of behavior-related stigma among MSM. Each correlate was first analyzed in bivariate cross-tabulations using SPSS version 22.0, with the chi-square test used to identify significant correlates. Candidate variables were dichotomized and unadjusted odds ratios were calculated. Multivariable analysis was then used to identify unconfounded correlates of behavior-related stigma by entering correlates with a p-value < 0.2 into a logistic regression model.
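As an illustration of the bivariate step described above, the following is a minimal sketch of an unadjusted odds ratio (with a 95% Wald confidence interval) and a Pearson chi-square statistic for a 2x2 table. The counts are hypothetical; the actual analysis was carried out in SPSS.

```python
import math

def two_by_two_stats(a, b, c, d):
    """Unadjusted odds ratio (with 95% Wald CI) and Pearson
    chi-square for a 2x2 table laid out as:
                     exposed   unexposed
        high stigma     a          b
        low stigma      c          d
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    n = a + b + c + d
    # Pearson chi-square without continuity correction
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return or_, (lo, hi), chi2

# Hypothetical counts for illustration only
or_, ci, chi2 = two_by_two_stats(60, 40, 30, 70)
print(or_, ci, chi2)
```

At 1 degree of freedom, a chi-square statistic above 3.841 corresponds to p < 0.05, matching the significance threshold used in the analysis.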
---
Ethical Approval
Ethical clearance was obtained from the Ethics Review Committee of the Faculty of Medicine, University of Kelaniya, Sri Lanka, and administrative clearance was obtained from the provincial and regional Directors of Health Services offices.
---
Results
The majority of the study participants were 30 years or older, with a mean age of 35.2 years (SD 12.3). Nearly three-quarters of the participants (72.5%, n = 361) were Sinhalese and 72.9% (n = 329) were Buddhists. Socioeconomic, demographic, and special characteristics related to MSM are shown in Table 1.
---
MedNEXT Journal of Medical and Health Sciences, Vol 4 Iss 4, Year 2023
---

Among them, only advancing age was statistically significantly associated. Ethnicity, religion, marital status, monthly personal income, residence, and being a formal worker were not significantly associated with a high level of behavior-related stigma among MSM, while having an educational level up to grade 10 was statistically significant (p = 0.006). Engagement in other work in the last six months was significantly associated with a high level of behavior-related stigma (p = 0.02). The majority of those who were forced into a hetero-normative marriage had a high level of behavior-related stigma (60.7%), a statistically significant association (p < 0.001).
The belief of family members that homosexuality is a mental disorder (p = 0.01), experiencing childhood sexual abuse (p = 0.04), being harassed by society in their lifetime (p = 0.01), verbal harassment from the police (p = 0.004), and being arrested by police during their lifetime (p = 0.003) were statistically significant associations.
The proportion of MSM participants who had had same-sex relationships within the past 10 years and had a high level of behavior-related stigma was 74.1% (n = 318), a statistically significant association (p = 0.04). Having anal sex with a male partner was also significantly associated with a high level of behavior-related stigma (p = 0.003).
Similarly, 61.5% of the participants who had oral sex with a male partner had a high level of behavior-related stigma (p < 0.001). However, having more than one male partner was not significantly associated with a high level of behavior-related stigma (p = 0.5). There was a statistically significant association between having vaginal sex with a female partner and a high level of behavior-related stigma (p = 0.001). Having oral sex with a female partner was also statistically significant (p = 0.014). There was no significant association with receiving benefits for sex as money, goods, or drugs (p = 0.8, p = 0.2, and p = 0.3, respectively).
Not always using condoms with a casual partner when having sex during the past three months (p = 0.001) and unaffordability as a reason for not carrying condoms (p < 0.001) were significantly associated with behavior-related stigma among MSM participants. Almost two-thirds of the participants who stated that they did not carry condoms due to unaffordability had a high level of behavior-related stigma.
Among the different groups of people to whom the key behavior was revealed, there was a statistically significant association for revealing to healthcare workers at both STD clinic settings (p = 0.001) and non-STD clinic settings (p = 0.002). Among the participants who had ever heard of HIV/AIDS, the majority (72.1%, n = 401) had high levels of behavior-related stigma; nevertheless, this association was not statistically significant (p = 0.7). Gaining knowledge regarding HIV/AIDS from health services, however, was significantly associated with a high level of behavior-related stigma among MSM (p < 0.001).
Neither the use of counseling services (p = 0.4) nor the perception that counseling is useful for solving problems related to same-sex behavior (p = 0.3) was significantly associated. Ever use of alcohol, using alcohol once a week (p < 0.001) or more often (p = 0.003), ever use of illicit psychoactive substances (p = 0.001), using illicit psychoactive substances orally (p = 0.001), and using them through inhalation (p = 0.003) were significantly associated with a high level of behavior-related stigma among MSM.
There was no statistical association between not being aware of laws regarding homosexuality and a high level of behavior-related stigma among MSM (p = 0.9). All the participants who were aware of laws affecting MSM thought that they were being discriminated against by those laws.
Among the variables considered in the logistic regression analysis, 13 factors were independently and significantly associated with a high level of behavior-related stigma among MSM after controlling for confounders, as shown in Table 2. These were: age more than 29 years (aOR = 2.1, 95% CI: 1.2-3.8), being educated up to grade 10 (aOR = 1.73, 95% CI: 1.03-2.9), engagement in a mode of income other than the main occupation (aOR = 0.34, 95% CI: 0.14-0.86), family and friends considering men having sex with men a mental illness (aOR = 5.4, 95% CI: 1.34-22.0), experience of sexual abuse in childhood (aOR = 8.03, 95% CI: 3.0-21.6), non-verbal harassment by relatives (aOR = 5.9, 95% CI: 2.2-15.6), being arrested by police in their lifetime (aOR = 0.02, 95% CI: 0.003-0.17), having oral sex with a male partner (aOR = 0.27, 95% CI: 0.16-0.46), not using condoms because they were not received from anybody (aOR = 2.2, 95% CI: 1.23-3.78), gaining knowledge regarding HIV/AIDS from the health sector and media (aOR = 0.47, 95% CI: 0.26-0.85), consuming alcohol once a week or more during the past month (aOR = 2.7, 95% CI: 1.53-4.8), and inhalation of psychoactive substances (aOR = 2.6, 95% CI: 1.4-4.7).
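The adjusted odds ratios reported above are exponentiated logistic-regression coefficients with Wald confidence intervals. A minimal sketch of that conversion, using an illustrative coefficient and standard error rather than values from the fitted model:

```python
import math

def aor_from_beta(beta, se, z=1.96):
    """Adjusted odds ratio and 95% Wald CI from a logistic-regression
    coefficient (beta) and its standard error (se)."""
    aor = math.exp(beta)
    return aor, math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative inputs only (not taken from the study's model)
aor, lo, hi = aor_from_beta(0.742, 0.30)
print(round(aor, 2), round(lo, 2), round(hi, 2))  # 2.1 1.17 3.78
```

An aOR whose confidence interval excludes 1 (e.g. the output above) corresponds to an independently significant correlate at the 5% level.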
---
Discussion
Several significant factors associated with behavior-related stigma among MSM were identified in this study, most of them modifiable. Among the socio-demographic characteristics studied, MSM educated up to grade 10 were identified as being at higher risk of behavior-related stigma (p = 0.039), which is consistent with the findings of a study conducted in South Africa [17].
Although advancing age (> 29 years) was independently associated with a high level of behavior-related stigma among MSM in the current study (aOR = 2.1, 95% CI: 1.2-3.8, p < 0.05), a similar study conducted among Vietnamese homosexual men in 2011 did not identify age as a significant correlate in multivariate analysis (p > 0.05) [18].
Further, the Vietnamese study identified that MSM who had ever been married (married/separated/divorced) were more likely to have a high level of self-stigma (aOR = 2.49, 95% CI: 1.02-6.09, p < 0.05), whereas the current study did not identify current marital status as a predictor of behavior-related stigma among MSM (p > 0.05). A possible reason for this difference is that the current study measured behavior-related personal stigma, a combination of self and perceived stigma, whereas the compared study measured self-stigma only. Another possible reason could be the social and cultural differences between the two study settings. Homosexuality is considered a crime in Vietnam.
In Sri Lanka, although sex between men is criminalized under Section 365A of the Penal Code [19], conversations about repealing these laws are ongoing. Using alcohol once a week or more frequently and inhaling psychoactive drugs other than alcohol were significantly associated with high levels of behavior-related stigma in this study (p = 0.001). Although Ha et al. (2014) assessed the use of alcohol and drugs in Vietnam, they did not find a significant association [18].
MSM whose family or friends considered men having sex with men a mental illness (p = 0.018) and those who had experienced sexual abuse in childhood (p < 0.05) were more likely to have a high level of behavior-related stigma, which was consistent with the findings of a Chinese study [20]. No studies have assessed the association between revealing same-sex behavior to a family member, close friend, healthcare worker at an STD clinic setting, or healthcare worker at a non-STD clinic setting and a high level of behavior-related stigma among MSM; therefore, these results of the current study could not be compared.
Among the MSM participants, those who had gained knowledge of HIV either from the health sector (p < 0.0001) or from the media (p = 0.013) were less likely to have a high level of behavior-related stigma than those who had not. Although Ha et al. (2014) assessed the association between knowledge of HIV and stigma (self, perceived, and enacted) among MSM in Hanoi, Vietnam, the source of knowledge was not assessed. The most probable explanation for this finding is that the health sector is considered the most reliable source of knowledge regarding HIV/AIDS in Sri Lankan settings. Meanwhile, gaining knowledge regarding HIV/AIDS through the media increases awareness not only among MSM but also among the general public.
Receiving counseling services to discuss problems related to same-sex behavior among men was not significantly associated with behavior-related stigma; no previous studies, either international or local, provide evidence of such an association. Although there are punitive laws that criminalize same-sex behavior among men, awareness and perceptions regarding these laws were not significantly associated with behavior-related stigma among MSM in this study, and no published studies address this expected correlation either.
---
Strengths and Limitations of the Study
Since a detailed study to explore the correlates of stigma among MSM has not been conducted in Sri Lanka, the findings of the study will be useful in planning programs to minimize stigma among MSM and improve their access to health care. This will eventually help to reduce the burden of HIV infection in the community.
The value of a cross-sectional study design is limited whenever there is a possibility that the dependent variable may change with the participants' risk behavior. Therefore, the absence of information on temporal relationships may render it difficult to separate the predictor variables from their outcomes [21].
---
Conclusions
Consideration of same-sex sexual activity among men as a psychiatric illness by friends and family, experience of sexual abuse in childhood, non-verbal harassment by relatives, being arrested by police during a lifetime, not carrying condoms because they were not received from anybody, consumption of alcohol once a week or more frequently, and inhalation of psychoactive substances were significantly associated with behavior-related stigma among MSM. Gaining knowledge regarding HIV/AIDS from the health sector and media was negatively associated with a high level of behavior-related stigma among MSM. Proper awareness and sensitization of the public regarding male homosexuality, and appropriate education regarding HIV/AIDS through the health sector and media, are of prime importance to minimize stigma among MSM.
---
Public Health Implications of the Study
Stigma due to homophobia in society reduces access to services by men who have sex with men. Therefore, proper awareness among the public regarding non-discriminatory interaction with MSM is of prime importance. It should be emphasized that same-sex behavior is not identified as a psychiatric condition by mental health professionals. Further, age-appropriate education and sensitization on HIV/AIDS, and its relationship to unprotected anal intercourse, are essential to minimize stigma among MSM.
---
Data sharing statement
No additional data are available.
---
Ethical Approval
Ethics clearance was granted by the Ethics Review Committee of the Faculty of Medicine, University of Kelaniya, Sri Lanka. Informed written consent was obtained from each participant prior to data collection.
---
Informed consent
Not applicable.
---
Conflict of interest
The authors declare no conflict of interest. |
Introduction: Only one-quarter of smokers in Pakistan attempt to quit smoking, and less than 3% are successful. In the absence of any literature from the country, this study aimed to explore the factors motivating, and strategies employed in, successful smoking cessation attempts in Pakistan, a lower-middle-income country. Methods: A survey was carried out in Karachi, Pakistan, amongst adult (≥ 18 years) former smokers (individuals who had smoked ≥ 100 cigarettes in their lifetime but who had successfully quit smoking for > 1 month at the time of survey). Multivariable logistic regression, with number of quit attempts (single vs. multiple) as the dependent variable, was performed while adjusting for age, sex, monthly family income, years smoked, cigarettes/day before quitting, and having suffered from a smoking-related health problem. Results: Out of 330 former smokers, 50.3% quit successfully on their first attempt, with 62.1% quitting "cold turkey". Only 10.9% used a cessation aid (most commonly nicotine replacement therapy: 8.2%). Motivations for quitting included self-health (74.5%), promptings by one's family (43%), and family's health (14.8%). Other social pressures included peer pressure to quit smoking (31.2%) and social avoidance by non-smokers (22.7%). Successful smoking cessation on one's first attempt was associated with being married (OR: 4.47 [95% CI: 2.32-8.61]), employing an abrupt cessation mode of quitting (4.12 [2.48-6.84]), and telling oneself that one has the willpower to quit (1.68 [1.04-2.71]). Conclusion: In Pakistan, smoking cessation is motivated by concern for one's own and one's family's health, family's support, and social pressures. Our results lay a comprehensive foundation for the development of smoking-cessation interventions tailored to the population of the country.

---
Introduction
Smoking, with an average of 7 million deaths per year, is currently the leading cause of preventable death in the world [1], and causes a significant burden of oral and other cancers [2]. Literature pertaining to smoking cessation has shown that around two-thirds of cigarette smokers are interested in quitting, with more than 50% reporting making a quit attempt in the past year [3]. However, fewer than one third of smokers who tried to quit used proven cessation methods, with only one in 10 smokers being able to quit successfully [3]. A UK-based study showed that one third of quitting attempts were not preplanned and around half of those were made without the use of any support, and were thus less likely to be successful [4]. Documented and validated support-based methods, and thus by extension a plan beforehand, contribute towards the success of any quit attempt [4].
To facilitate those with the intention to quit smoking, it is imperative to identify factors motivating successful cessation in former smokers and use these to support others' quit attempts [5]. Cessation-aid interventions that are designed according to specific motivations to quit smoking are likely to increase the chances of successful cessation [6]. Factors motivating smoking cessation range from internal/individual factors (such as a smoker's emotional state and willpower) to external factors (such as advice on why and how to quit from health professionals, environmental smoking restrictions, and expectations about the benefits of quitting) [7]. The importance of internal/individual factors must not be undermined, as they have been shown to affect the efficacy of smoking cessation programs [7,8].
While there is extensive literature exploring factors motivating smoking cessation amongst populations in developed countries [9], such research is scarce from lower-middle-income countries (LMICs) such as Pakistan. Around 19.1% of Pakistan's adult population are tobacco users, with the majority being smokers [10], and approximately 10% of deaths in Pakistan annually are attributable to smoking [11]. Apart from devastating consequences on population health, smoking also costs Pakistan approximately Rs. 192 billion (1.37 billion United States Dollars) annually due to costs associated with smoking-associated cancers, respiratory disease, and cardiovascular disease [12]. However, according to the World Health Organization (WHO) Global Adult Tobacco Survey (GATS), a much lower percentage (24.7%) of smokers in Pakistan make attempts to quit smoking, as compared to other countries (40-50%) [13,14]. In addition, the success rates of quit attempts are also lower for smokers in Pakistan (2.6%), as compared to those reported by international literature [13]. Almost half of the smokers attempting to quit did so without assistance (49.2%) and were hence less likely to be successful, with only 9.1% making use of pharmacotherapy and 14.7% of counseling [14]. The huge gap between the number of smokers attempting to quit and those actually successful highlights the ineffectiveness or absence of adequate motivators of smoking cessation and interventions designed to motivate and support successful cessation attempts in Pakistan [15]. The GATS survey also found that almost two-thirds (63.9%) of smokers were individuals without any education and around 59.8% were not interested in quitting [14]. This calls into question the benefit of mass media campaigns for smoking cessation, particularly those using a textual medium, in a country where the majority of smokers are illiterate [14]. 
In addition, since most smokers in Pakistan come from lower socioeconomic backgrounds [16], cessation aids such as pharmacotherapy and counseling may be out of the financial reach of many individuals. Lastly, cultural and religious influences on smoking practices [17] may contribute to cessation patterns that differ from those seen in Western countries.
Although the 2014 GATS survey [14] provides highly generalizable national-level data regarding the sociodemographic distribution of ex-smokers and their use of cessation aids, it did not explore factors driving cessation itself. This gap in knowledge represents a niche that invites further research. Moreover, though the GATS survey reported that 29.7% of current smokers thought of quitting because of warning labels on cigarette packages [14], the impact of other public health interventions to promote cessation was largely unexplored. Thus, this study aims to describe factors motivating successful smoking cessation attempts in Pakistan, so that these may be incorporated towards the development of smoking cessation interventions that are targeted to the population of the country. In addition, this study also aims to identify motivators and strategies that are associated with successful cessation on one's first attempt. Lastly, our study also reports the perceived usefulness of public health interventions in motivating cessation and resisting relapse amongst ex-smokers.
---
Methods
---
Study setting and population
This cross-sectional survey was carried out in Karachi, Pakistan, after approval from the institutional review board at the Aga Khan University Hospital (AKUH). The target population for this survey was adult former smokers, who were defined as adult (≥ 18 years) individuals who had smoked at least 100 cigarettes in their lifetime but who had successfully quit smoking at the time of survey [18]. A quit attempt was defined as deliberately stopping smoking for > 1 week, while successful quitting was defined as having deliberately stopped smoking for > 1 month [18]. A quit attempt was categorized as unsuccessful if any smoking relapse (≥ 1 cigarette smoked) took place after a quit attempt.
---
Survey characteristics
Data was collected by means of a questionnaire that was available in both English and Urdu, the national language of Pakistan. In the absence of a prior questionnaire suitable for our population, a comprehensive questionnaire was developed using elements from various sources [9,19,20] in close association with faculty with expertise in tobacco cessation research at the Section of Pulmonary and Critical Care Medicine at AKUH and the University of York. Content validity was assessed by calculating a content validity index (CVI) for relevance and clarity based on the ratings of three subject experts and a biostatistician. A CVI for relevance of 0.92 and for clarity of 0.89 indicated good content validity for the tool. The English questionnaire was then translated to Urdu by an independent translator fluent in both languages and with experience in questionnaire translation. To ensure face validity, the English and Urdu versions of the survey underwent pilot testing amongst 30 respondents, and any ambiguous questions were subsequently modified as appropriate. The final survey contained the following five sections:

1. Demographics and Job Characteristics: age, sex, marital status, and monthly family income.
2. History of Smoking and Smoking Cessation: age at starting smoking, duration of smoking, number of quit attempts, cigarettes/day before cessation, time since quitting, age at cessation, difficulty of cessation (5-point Likert scale: 5 = very difficult; 4 = difficult; 3 = neither difficult nor easy; 2 = easy; 1 = very easy), and perceived self-efficacy in quitting (question: "Do you believe you have quit definitively?"; responses: Yes/No/Unsure) [9].
3. Strategies Employed in Smoking Cessation: mode of quitting used in the successful attempt (abrupt cessation/"cold turkey" vs. gradual reduction), use of a cessation aid (checklist of different available cessation aids) [9], strategies for self-discipline (Yes/No for each strategy using a checklist) [20], strategies for self-distraction from smoking (Yes/No for each strategy using a checklist), and positive reinforcement strategies (Yes/No for each strategy using a checklist) [20].
4. Factors Motivating Smoking Cessation: major reasons for quitting smoking (Yes/No for each reason using a checklist), sources of awareness regarding the need to quit smoking (Yes/No for each source using a checklist) [19], social factors motivating cessation (Yes/No for each factor using a checklist), factors related to self-image (Yes/No for each factor using a checklist) [20], and existence of smoking-related health problems [9].
5. Usefulness of Public Health Interventions in Aiding Smoking Cessation: the helpfulness of public health interventions in motivating cessation and resisting relapse (multiple-choice responses: not helpful at all, helpful to a small extent, or helpful to a great extent).

The survey was preceded by a consent form (available in both English and Urdu) explaining the nature and scope of the survey. In addition, preliminary screening questions based on current smoking status ensured that current smokers or those who had quit for < 1 month were not allowed to proceed with answering the survey.
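The content validity index reported above is conventionally computed as the proportion of experts rating each item 3 or 4 on a 4-point scale, averaged across items (the S-CVI/Ave). A minimal sketch with hypothetical ratings (the actual expert ratings are not given in the text):

```python
def content_validity_index(ratings):
    """ratings: list of per-item lists of expert scores on a
    4-point scale (1 = not relevant ... 4 = highly relevant).
    I-CVI = proportion of experts scoring the item 3 or 4;
    S-CVI/Ave = mean of the item-level I-CVIs."""
    i_cvis = [sum(r >= 3 for r in item) / len(item) for item in ratings]
    return sum(i_cvis) / len(i_cvis), i_cvis

# Hypothetical ratings from four raters on three items
s_cvi, i_cvis = content_validity_index([
    [4, 4, 3, 4],   # all four rate relevant  -> I-CVI 1.00
    [3, 4, 2, 4],   # three of four           -> I-CVI 0.75
    [4, 3, 4, 4],   # all four                -> I-CVI 1.00
])
print(round(s_cvi, 2))  # 0.92
```

An S-CVI/Ave of 0.90 or above is commonly taken to indicate excellent content validity, which is consistent with the interpretation given for the relevance index of 0.92.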
---
Sample size calculation
Since no published literature reports factors motivating smoking cessation in Pakistan, it was assumed that approximately 75% of former smokers would have quit for health purposes (to protect present or future health), which was expected to be the most common reason. This figure is based on a study by Gallus et al. in 2013 that was conducted amongst 3075 former smokers in a European population [21]. The sample size required for our study was calculated using OpenEpi. Using a 95% confidence level, the minimum required sample size was determined to be 288 adult former smokers.
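The figure above is consistent with the standard sample-size formula for estimating a single proportion, n = z² · p · (1 − p) / d², assuming a 5% absolute margin of error d (the margin of error is not stated in the text, so this is an inferred input):

```python
def sample_size_proportion(p, d=0.05, z=1.96):
    """Minimum n to estimate proportion p within +/- d at 95%
    confidence: n = z^2 * p * (1 - p) / d^2."""
    return z * z * p * (1 - p) / (d * d)

# p = 0.75: assumed prevalence of quitting for health reasons
n = sample_size_proportion(0.75)
print(round(n))  # 288
```

Note that the formula is maximized at p = 0.5, so a more conservative assumption about the leading reason for quitting would have required a larger sample.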
---
Sampling technique
In order to achieve a representative sample for this study, data collection was conducted on the premises of five tertiary care hospitals (three government-owned and two privately-owned) in Karachi, including AKUH. Non-probability convenience sampling was used to recruit participants for the survey. Data collectors approached patients' attendants (persons accompanying patients) for participation in the survey. Individuals who had presented to the hospital for reasons pertaining to their own health were not considered for inclusion. Patients' attendants were assumed to be representative of the general population. After initially introducing the study and obtaining consent from the individual, the data collectors screened potential participants according to the inclusion and exclusion criteria. If the individual was suitable for inclusion, informed consent was obtained and a copy of the consent form was provided to the participant. Next, the data collectors verbally administered the survey in English or Urdu, according to the participant's preference.
---
Ethical considerations
To ensure privacy, the interaction of administering the survey took place in the nearest quiet location (empty room) on the hospital premises, according to the participant's comfort. Moreover, to maintain anonymity, the questionnaire did not record respondents' names. There were no risks, immediate benefits, or incentives for participation in the survey.
---
Statistical analysis
Statistical analysis was performed using IBM SPSS version 23. Continuous data was presented using mean and standard deviation/median (interquartile range), and compared using independent-sample t-tests/Mann-Whitney U tests, as appropriate. Categorical data was presented using frequencies and percentages, and compared using chi-squared tests/Fisher's exact tests. Content validity indices (CVI) were calculated for clarity and relevance based on the ratings of three content experts and a biostatistician. Multivariable logistic regression, adjusting for age, sex, monthly family income, years smoked, cigarettes/day before quitting, and having suffered from a smoking-related health problem, was performed with the number of quit attempts as the dependent variable (dichotomized as single attempt/successful on first attempt vs. multiple attempts/one or more unsuccessful attempts before a successful attempt). A p-value < 0.05 was considered statistically significant for all analyses.
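A toy sketch of the regression step with a single binary predictor and hypothetical data (the study fitted a multivariable model in SPSS; gradient ascent on the log-likelihood below stands in for SPSS's maximum-likelihood fitting):

```python
import math

def fit_logistic(xs, ys, lr=0.5, epochs=5000):
    """Single-predictor logistic regression fitted by gradient
    ascent on the log-likelihood; returns (intercept, slope).
    Illustrative only -- statistical packages use iteratively
    reweighted least squares for the same maximum."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical data: y = 1 if quit on first attempt, x = married (1/0)
xs = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
ys = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
b0, b1 = fit_logistic(xs, ys)
print(math.exp(b1))  # odds ratio for the binary predictor
```

With a single binary predictor the fitted exp(slope) reproduces the sample odds ratio exactly; with the full covariate set it becomes the adjusted odds ratio reported in the results.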
---
Results
A total of 330 former smokers were included, the majority being male (92.7%) and aged 18-30 years (43%) or 31-45 years (27.9%). Monthly family income was < Rs. 25,000 for 49.7% of respondents and > Rs. 75,000 for 18.2%. The mean age at which respondents started smoking was 18.05 years, while the mean age at successful quitting was 31.37 years. Around half of the respondents reported having successfully quit smoking on their first attempt (50.3%), while 17.9% reported > 6 quit attempts. Most respondents reported smoking < 10 cigarettes a day (68.2%) at the time they began their successful quit attempt (Table 1).
The majority of respondents reported that they had abruptly stopped smoking (quit "cold turkey"; 62.1%). However, only 36 (10.9%) respondents reported using a cessation aid during their successful quit attempt. Nicotine replacement therapy was the most common cessation aid used (n = 27; 8.2%). Additionally, 3 (0.9%) respondents reported using mint gums, while only 2 (0.6%) reported using pharmacological cessation therapy and 1 (0.3%) reported having attended psychotherapy/counselling sessions for smoking cessation. Respondents also reported avoiding social company that encouraged smoking (46.4%), as well as triggers that caused an urge to smoke (28.5%). The majority of respondents believed that they had quit smoking definitively (83.9%), although most felt that giving up smoking was very difficult/difficult (63.9%). Respondents reported using a variety of ways to discipline or distract themselves when they felt the urge to smoke, as well as various positive reinforcement strategies to aid cessation (Table 2). The most frequently reported reason for quitting smoking was to improve or protect one's own health (74.5%), which also served to justify our earlier estimate of 75% for the sample size calculation. Other common reasons included promptings by one's family (43%) and improving/protecting the health of family members (14.8%). Over a third of respondents (38.8%) reported suffering from a smoking-related health problem. Common sources of awareness regarding the need to quit smoking included family/friends/colleagues (37.6%), doctors (24.8%), and social media/online platforms (20.6%). Certain social pressures to quit smoking, such as peer pressure to quit (31.2%) and social avoidance by non-smokers (22.7%), were also reported. Respondents also reported having felt the need to give up smoking to be content with themselves (33.3%) and having felt upset whenever they felt the urge to smoke (30.9%).
The various factors that encouraged smoking cessation are shown in Table 3.
The majority of respondents felt that anti-smoking public health interventions were not helpful at all. Consumer warnings on cigarette packs (4.5%), increased prices/taxes on cigarettes (4.5%), and smoke-free public recreational places (4.2%) were most commonly reported to be helpful to a great extent in motivating cessation. Similarly, increased prices/taxes on cigarettes (4.8%) and consumer warnings on cigarette packs (4.2%) were most frequently reported to be helpful to a great extent in resisting relapse (Table 4).
---
On multivariable logistic regression (
---
Discussion
This study was conducted to explore factors associated with successful smoking cessation in former smokers in Pakistan, a lower-middle-income country (LMIC) in South Asia. Our study identified personal health, promptings from one's family, and one's family's health as the most important motivating factors. Social pressures to quit smoking included peer pressure to quit and social avoidance by non-smokers. Lastly, successful cessation on one's first quit attempt was associated with being married, quitting cold turkey, having a negative self-image of oneself due to smoking, and having strong willpower to quit. The commonest reasons for quitting smoking were to improve/protect one's own health (74.5%), family's promptings (43%), to improve/protect the health of family members (14.8%), and to save money (14.5%). Respondents reported receiving awareness regarding the need to quit smoking most commonly from their family, friends, and colleagues (37.6%). Moreover, social pressures, such as peer pressure to quit smoking (31.2%), social avoidance by non-smokers (22.7%), and non-smokers asserting rights to smokeless public spaces (9.1%), were also major deterrents. Studies from the United States, Poland, and France have demonstrated similar results, with health concerns, discouragement of smoking at home, and the high cost of cigarettes being important deterrents [22][23][24]. In addition, social pressure, such as having a smoke-free social network that pressurizes towards cessation, has also been found to be a strong motivator of cessation across different populations [23][24][25]. It is interesting that promptings by doctors were reported as a reason for quitting by only 13% of respondents, and only one quarter (24.8%) of respondents received cessation-related awareness from their doctors.
A study from the United Kingdom revealed that most patients were skeptical about doctors' smoking cessation advice, which was often generic and of a preaching nature, and suggested that doctors should practice a more personalized approach to cessation counseling [26].
Around half (50.3%) of the respondents in our study reported quitting successfully on their first attempt, while the remaining reported needing 2-5 attempts (31.8%) or > 6 attempts (17.9%). These findings contrast sharply with the number of attempts usually suggested by smoking cessation programs, which ranges from 8 to 14 according to The American Cancer Society, the Australian Cancer Council, and the Centers for Disease Control [27][28][29]. However, there is some literature that aligns with our findings, as it has been suggested that though the number of quit attempts may be quite high on average, between 40 and 52% of smokers may be successful on their first serious attempt [30,31].
On multivariable regression, successful cessation on the first attempt was associated with being married, quitting cold turkey, having a negative self-image of oneself because of being a smoker, telling oneself one has the willpower to resist the urge to smoke and quit definitively, and consciously diverting one's thoughts to distract oneself from smoking. While the actual contribution of willpower to smoking cessation has long been debated [32], it has previously been demonstrated to be an important factor in Pakistan [13]. Moreover, personal willpower is an essential feature of the "5A's" model in "Treating Tobacco Use and Dependence" [33], of which the first three A's build towards willingness to quit and the last two A's facilitate those willing to quit to take the final decision to quit. The importance of personal willpower in single-attempt cessation is reinforced by our finding that family's promptings, as a major reason for cessation, were negatively associated with single-attempt cessation. This suggests that personal motivation arising from within the individual is more likely to lead to successful cessation than motivation arising externally. Additionally, quitting cold turkey has been recommended as more successful in smoking cessation, as compared to gradually tapering off cigarette use [34]. Interestingly, in our study, use of a smoking cessation aid was negatively associated with quitting on the first attempt, a finding corroborated by a survey by Manis et al. in Switzerland [35]. With regards to self-image, while having a negative self-image due to one's addiction may cause distress to the smoker [36], it can also function as a powerful motivator to quit smoking as it negates the perceived benefits of smoking [37].
Lastly, being with a spouse or partner who is a non-smoker, a former smoker, or who encourages and motivates quitting, is associated with a greater likelihood of success on cessation attempts [38][39][40].
Self-distraction by consciously diverting one's thoughts to other matters (37.3%), trying to keep one's hands and fingers occupied (34.5%), and engaging in work (28.8%), were useful strategies reportedly used by respondents. Moreover, consciously diverting one's thoughts to other matters was significantly associated with single-attempt cessation on multivariable regression. These are encouraging findings, as they are simple yet effective. More technological methods of distraction, such as mobile phone applications and games [41,42], that have been piloted in developed countries may not be feasible for a resource-constrained country like Pakistan. In addition, positive reinforcement strategies, such as expecting rewards (23.6%) and receiving rewards (19.1%) from others for resisting the urge to smoke, were also employed by respondents. Rewards and incentives, often monetary, are helpful in motivating smoking cessation, especially when individualized [43,44].
Lastly, none of the public health interventions mentioned in our survey were perceived by respondents as particularly useful for helping smoking cessation or resisting relapse, with less than 5% of respondents rating any intervention as helpful to a great extent. This is in direct contrast with studies from developed countries, such as the United States [45,46], and may be explained by several reasons. Firstly, interventions such as government or private sector mass media anti-smoking campaigns, anti-smoking advertisements, and health warnings preceding/during films, may not be effective amongst those of lower socioeconomic and less educated backgrounds. Secondly, although Pakistan subscribes to the MPOWER model of tobacco control outlined by the World Health Organization [47], it is possible that these interventions are not practically implemented in an optimal manner. Thirdly, since our results highlight how former smokers predominantly attribute the success of their cessation to personal factors, such as willpower, self-discipline, and distraction strategies, they are perhaps unable or hesitant to acknowledge the potentially subconscious impact of external motivators. Nevertheless, further studies are required to determine the efficacy of such large-scale public health interventions in the setting of an LMIC like Pakistan, in terms of both improving cessation and cost-effectiveness.
Despite the major burden of tobacco consumption in the country, Pakistan lacks any major smoking cessation programs or clinics facilitating rehabilitation, which, along with the low cost and easy availability of tobacco, can make the difficult task of quitting even more challenging [13]. The results of our study provide a comprehensive and unique understanding of the factors that motivate smoking cessation in Pakistan. However, despite the varied distribution of socio-demographic characteristics achieved by targeting five different settings for data collection, the convenience sampling methodology used may limit the degree of generalizability of our findings to other populations in Pakistan. Nevertheless, our findings can help guide the development of evidence-based programs for smoking cessation in Pakistan and lay the foundation for similar larger-scale national research. Other potential limitations include the self-reported nature of our data as well as the possibility of social desirability bias. Future research must investigate motivators, strategies, and patterns specific to sex, age, socioeconomic status, education level, and other demographics.
---
Conclusion
Major motivations for smoking cessation in a Pakistani population include protecting the health of oneself or one's family members, and promptings from family members. Self-discipline, personal willpower, distraction strategies, and positive reinforcement play an important role in a population where smoking cessation aids may be inaccessible to many. Moreover, peer-pressure to quit and social exclusion also motivate smokers towards quitting, as does the negative self-image one develops because of one's addiction to smoking. Lastly, most public health interventions, such as mass media campaigns and anti-tobacco advertisements, were not perceived as being helpful for motivating cessation.
---
Availability of data and materials
The data are available from the authors on reasonable request and cannot be shared publicly due to constraints of the institutional review board at the Aga Khan University.
---
Authors' contributions RSM conceptualized and supervised the investigation, along with devising the methodology and analyzing the data. RSM was a major contributor in writing and editing the manuscript. MUJ conceptualized the investigation, along with devising the methodology and analyzing the data. MUJ was a major contributor in writing and editing the manuscript. MSK supervised the investigation, along with devising the methodology and analyzing the data. MSK was a major contributor in writing the manuscript. NA collected the data by verbally administering the survey. NA also contributed to analyzing the data and writing the manuscript. ZZF collected the data by verbally administering the survey. ZZF also contributed to writing the manuscript. MU collected the data by verbally administering the survey. MU also contributed to analyzing the data. FS collected the data by verbally administering the survey and supervised the investigation. JAK supervised the investigation and contributed to editing the manuscript. The author(s) read and approved the final manuscript.
---
Declarations
---
Ethics approval and consent to participate
This study received ethical approval from the ethics review committee of the Aga Khan University (Reference Number: 2020-1394-8954). If a participant was suitable for inclusion, informed consent was obtained, and a copy of the consent form was provided to the participant. All methods in the study were carried out in accordance with the ethical principles outlined in the Declaration of Helsinki (1964) and its subsequent amendments.
---
Consent for publication
---
Competing interests
None of the authors have any conflicts of interest to declare.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
This paper demonstrates the use of differentially private hyperlink-level engagement data for measuring ideologies of audiences for web domains, individual links, or aggregations thereof. We examine a simple metric for measuring this ideological position and assess the conditions under which the metric is robust to injected, privacy-preserving noise. This assessment provides insight into and constraints on the level of activity one should observe when applying this metric to privacy-protected data. Grounding this work is a massive dataset of social media engagement activity, provided by Facebook and the Social Science One (SS1) consortium, where privacy-preserving noise has been injected into the data prior to release. We validate our ideology measures in this dataset by comparing to similar work on sharing-based, homophily- and content-oriented measures, where we show consistently high correlation (> 0.87). We then apply this metric to individual links from six popular news domains and construct link-level distributions of audience ideology. We further show this estimator is robust to engagement types besides sharing, where domain-level audience-ideology assessments based on views and likes show no significant difference compared to sharing-based estimates. Estimates of partisanship, however, suggest the viewing audience is more moderate than the audiences who share and like these domains. Beyond providing thresholds on sufficient activity for measuring audience ideology and comparing three types of engagement, this analysis provides a blueprint for ensuring robustness of future work to differential privacy protections.
Datasets of large-scale online behavior and digital traces are growing more sensitive as privacy expectations and regulations mature. To address such concerns, data providers are turning to differential privacy to balance large-scale data releases with maintaining privacy guarantees for individuals whose data may be included in these releases. Differential privacy techniques operate by injecting noise into observations to prevent identification of individuals in these datasets (see Wood et al. (2020) for an introduction to these methods). These protections come at a cost, however, as standard analyses may produce biased or erroneous results if they do not account for such protections (Evans et al. 2019).
This issue is particularly evident in the release of the "Facebook Privacy Protected Full URLs Dataset," referred to as the "Condor" dataset, where Facebook and the SS1 consortium have released a massive collection of 63.5 million links shared on the Facebook platform along with differential-privacy-protected engagement data on age, gender, location, and political preference (Messing et al. 2020). Condor is the largest dataset of link-level engagement released to date and holds marked potential for studying large-scale online behaviors, but researchers lack guidance in and examples of methods that account for differential privacy protections.
This paper provides this guidance by 1) examining a simple weighted-average metric for calculating ideological positions of audiences for web domains1 based on link-sharing in Facebook in the presence of differential-privacy protections, 2) showing how differential privacy impacts this metric, 3) establishing bounds on how this metric should be used, and 4) validating this metric against similar domain-level, sharing-based measures. While similar metrics have been proposed, those efforts rely on highly sensitive data, such as internal Facebook data (as in Bakshy, Messing, and Adamic (2015)) or Twitter profiles aligned with sensitive "voter-file" information (as in Robertson et al. (2018)); in contrast, this paper's metric can be calculated solely from this differentially private, public dataset.
After establishing constraints for our differential-privacy-resilient metric, we use it to extract novel insights about individual hyperlinks, where sparsity issues have forced previous approaches to use domain-level measures. We then assess how different types of engagement (views and likes) impact our measures. For individual hyperlinks, we estimate distributions of link-level ideology measures for several thousand individual links across six popular domains, including YouTube.com, providing insight into long-standing questions about partisan audiences on that platform. For different types of engagement, we measure differences in domains' audience ideologies using link-sharing, viewing, and liking behaviors, also answering open questions about consistency in measurement across engagement types; results show no significant deviation in domain-level estimates across these activities, though a domain's viewing audience is on average more moderate than its sharing or liking audiences. Given the commercial value of viewing data in online platforms, this result is particularly encouraging for the generalizability of share-based studies and for future efforts that leverage protected versions of this sensitive data. This work's core contributions are:
• A demonstration of how a simple metric for estimating ideology of a domain's audience can be made robust to differential privacy protections;
• An examination of link-level distributions of ideology across six major news sources; and
• New insight into how varied forms of engagement (shares, views, and likes) impact audience ideology estimates.
---
Related Work
This paper engages with two main communities: First, a large body of research exists on inferring ideology of audiences in studies of media bias and polarization, especially in online spaces (Gentzkow and Shapiro 2010;Bakshy, Messing, and Adamic 2015;Budak, Goel, and Rao 2016;Robertson et al. 2018), which both motivates this work and provides sources for validation. Second, much of the data that could be useful for similar studies is often restricted and sensitive; recent work has explored methods for providing data protections of such sensitive data while still enabling inference on this data (D'Orazio, Honaker, and King 2015), which directly informs our work. Before describing contributions to these communities, we first provide an overview of differential privacy to situate this work.
---
A Brief Primer on Differential Privacy
At its core, differential privacy is an approach to collecting and disseminating aggregate statistics in a way that guarantees some level of privacy for individuals whose data is used to generate these statistics. While Wood et al. (2020) provides an overview of differential privacy for non-technical audiences, such protections generally provide a form of plausible deniability for individuals whose data is included in these statistics. This deniability comes from the property that a third-party cannot learn anything about a single individual whose data contributes to these statistics that could not be learned if that individual's data were excluded.
Hence, an individual could claim their data was never included in the released statistics at all, allowing them to deny potential allegations derived from the data. Digital trace data can therefore benefit from applications of differential privacy, as large-scale, aggregate datasets can be released in privacy-protected forms that reduce potential harm to the populations from which the data is collected (D'Orazio, Honaker, and King 2015). These protections are also consistent with calls for and regulations on enhanced protections of digital consumer data, and groups like the US Census Bureau are using similar ways to protect sensitive data. These protections are generally applied by adding noise into the computations of aggregate statistics. Researchers can tune characteristics of this noise to quantify potential privacy loss, and by tracking this loss over subsequent dataset releases, those creating these datasets can maintain privacy guarantees. Characteristics of this noise can be shared as part of the release process without risking these privacy guarantees, so researchers can account for this noise without identifying individuals within the dataset. In the context of the Condor dataset, Facebook bounds privacy loss by adding noise to the aggregated engagement statistics prior to releasing the dataset to external researchers. That is, if a hyperlink has been shared X times in a given month, Facebook adds noise ϵ drawn from a Gaussian distribution to this value, and external researchers only ever see X + ϵ.
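As a minimal illustration of this release mechanism (a sketch of ours, not Facebook's actual implementation), the protected count is simply the true count plus a zero-centered Gaussian draw:

```python
import random

def release_count(true_count: int, sigma: float, rng: random.Random) -> float:
    """Return a privacy-protected count: the true count plus zero-centered
    Gaussian noise with standard deviation sigma. Larger sigma means
    stronger privacy guarantees but noisier released observations."""
    return true_count + rng.gauss(0.0, sigma)

rng = random.Random(42)
# A hyperlink shared 10,000 times in a month, released with sigma = 20:
noisy_shares = release_count(10_000, 20.0, rng)
```

For a heavily shared link like this one, the noise is small relative to the signal; the metric examined later in this paper hinges on exactly that relationship.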
---
Measuring Political Ideology
Methods for estimating ideology, partisan lean, media slant, or similar aspects of information sources (e.g., newspapers, websites, or communities) are well-studied and generally fall into one of two categories: content-or homophily-based approaches. Content-based approaches generally analyze language, while homophily-based methods propagate individuals' ideological preferences to the information sources these individuals share or consume. Content-based analyses like the study of media slant in Gentzkow and Shapiro (2010) or media bias in Budak, Goel, and Rao (2016) are powerful but require content analysis and either manual assessment (as in Budak, Goel, and Rao 2016) or information about political preferences of people sharing that content to learn mappings of particular language to political preference (as in Gentzkow and Shapiro 2010). In contrast, while homophily-based methods need information about political preferences, they do not require analysis of actual content and instead rely on interactions among nodes in a network. Through these interactions, one can propagate political preferences to neighboring nodes, making these methods particularly amenable to algorithmic assessment. In online social networks, such interactions are often computationally cheap to collect through APIs or found data, making these approaches popular in research. Homophily-based methods for inferring political ideology have been used to measure online/offline ideological segregation (Gentzkow and Shapiro 2011), diversity in online news (Bakshy, Messing, and Adamic 2015), ideological biases in search engines (Robertson et al. 2018), and even political lean of disinformation agents (Golovchenko et al. 2020).
Despite the clear utility and popularity of such homophily-based approaches, when these methods use social media data to measure ideology of a news source's or web domain's audience, as in Robertson et al. (2018), Golovchenko et al. (2020), Eady et al. (2020), and others, they commonly rely on easily collectable sharing behavior (e.g., an individual shares a tweet with a link to a domain). While sharing behavior is easy to collect from sources like Twitter and Reddit, prior research on social media spaces and online communities shows that the vast majority of users on a platform do not actively share or produce content (Nonnecke and Preece 2000; Preece, Nonnecke, and Andrews 2004; Gong, Lim, and Zhu 2015); Benevenuto et al. (2009) in particular suggests that 92% of all behavior in one social network consisted of content viewing alone, which does not produce collectable artifacts in many public APIs. Data about these viewing behaviors, however, is commercially sensitive, and most social media platforms do not make this data publicly available. For studies of viewing behavior in Facebook, for example, up to the release of SS1's Condor dataset, one has had to rely on researchers employed by Facebook, as in Bakshy, Messing, and Adamic (2015), or partner with researchers at Facebook. This reliance on content production and sharing leads to a problematic implication: A media source's ideological slant is significantly affected by the source's audience, and as share-based metrics omit activity from a significant portion of the viewing audience, measures of the media source's audience may differ significantly from the true distribution (Gentzkow and Shapiro 2010). A unique aspect of the Condor dataset provided by SS1, however, is that it provides engagement data across both sharing and viewing behaviors, binned across several political-preference buckets. Hence, the work in this paper can shed some needed light on the differences in estimates based on shares versus views.
That is, by first comparing results from our share-based audience ideology metric to existing work in this area, we can validate our metric despite the privacy-protecting noise injected into Condor observations. Then, by comparing results from our share-based estimator to estimators based on views -and indeed other behavior, such as Facebook's "Like" reaction, which is similar to Twitter's "Favorite" affordance -we can evaluate whether differences in share-and view-based estimates differ significantly.
---
Inference and Protected Data
While the above context on measures of media bias and audience ideology shows a clear need for understanding the impacts of share-versus-view-based metrics, as mentioned, view data is both commercially sensitive and highly private. Facebook has endeavored to help researchers in this need by releasing the Condor dataset and protecting it with differentially private noise, as described in Messing et al. (2020). While works such as D'Orazio, Honaker, and King (2015) and Evans and King (2022) outline how differential privacy can support inference in the social sciences, how these protections impact researcher utility remains an open question. Evans and King (2022) even shows that ignoring differential privacy can lead to unpredictable biases in results, including biasing estimated effects towards zero, or in some particularly problematic cases, inverting the sign of the estimates. Despite these risks, Evans and King (2022) shows corrections are feasible in certain scenarios, as the noise added to data for establishing differential privacy guarantees is equivalent to increasing standard measurement error, and for linear systems (e.g., linear regression models), one can correct for noise if details of the noise-generating distribution are known. For non-linear systems like the weighted-average metric we present, however, analytically based corrections are not readily available.
We instead build on Evans and King (2021), which outlines the context in which bias in a ratio metric can be bounded. Evans and King (2021) claims that, if the noise introduced is generally much smaller than true observations, bias in the noisy metric is minimal. While this result is valuable, no guidance is provided regarding how large observations should be relative to noise or how to evaluate whether one is in this regime. Hence, this paper provides this much-needed guidance for using these noisy observations in the Condor dataset to study media and ideology in a robust manner. We further validate these methods against extant results where such privacy protections are not in place.
Condor: The Facebook Privacy Protected Full URLs Dataset
As a brief overview, the "Privacy Protected Full URLs" dataset, provided to academic researchers by Facebook and the SS1 consortium, is a large-scale collection of URLs and associated engagement data for 63,574,836 hyperlinks shared on the Facebook platform. This dataset exists to provide researchers new insight into how individuals engage with hyperlinks on the Facebook platform while simultaneously maintaining strong privacy guarantees for Facebook users. For a URL to be included in this dataset, it must have been publicly shared by approximately 100 unique individuals (see Messing et al. (2020) for more details). As of this writing, the dataset is on its ninth iteration and contains monthly engagement metrics for all months between 1 January 2017 and 31 December 2021.
For each of these URLs, the dataset contains counts for 11 actions one can take on the Facebook platform (sharing, viewing, liking, commenting, clicking, etc.), broken down by month and audience demographics. These demographics cover an individual's country, age, and gender, from one of 45 countries, seven age groups, and three gender groups (Messing et al. 2020). In the US, Condor further decomposes these counts across six bins representing individual-level political preference, using a "political page affinity" (PPA) metric, a homophily-based measure defined by Barberá et al. (2015) and described in Messing et al. (2020). PPA measures an individual's political ideology on a scale b ∈ {-2, -1, 0, +1, +2} (-2 is very liberal, and +2 is very conservative), with an additional bin for individuals whose PPA is unknown; we exclude this sixth group from our analyses. In this manner, the Condor dataset makes available highly sensitive but valuable engagement data for large volumes of online information sharing and consumption behavior.
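To make this binning concrete, a hypothetical per-URL record might look like the following (the field names and values are illustrative assumptions of ours, not Condor's actual schema):

```python
# Hypothetical engagement record for one URL in one month. Share counts are
# keyed by political page affinity (PPA) bin, from -2 (very liberal) to
# +2 (very conservative); the sixth, unknown-affinity bin is excluded here.
url_record = {
    "url": "https://example.com/article",
    "month": "2017-01",
    "shares_by_ppa": {-2: 120, -1: 340, 0: 510, 1: 290, 2: 80},
}

total_shares = sum(url_record["shares_by_ppa"].values())  # 1340
```

The metric developed in the next section is simply a weighted average over the keys of such a `shares_by_ppa` mapping.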
Given the sensitivity of this data and to protect users of the Facebook platform from potential de-identification, researchers using the Condor dataset do not have direct access to the raw monthly counts of these activities. Instead, researchers can only observe counts of these activities after Facebook has added zero-centered Gaussian noise to them in accordance with zero-concentrated differential privacy (Bun and Steinke 2016). By controlling the amount of noise relative to the amount of engagement across these demographic bins, Condor provides privacy guarantees about the probability of an individual person's single action (e.g., share, view, like, etc.) being attributed to that person. That is, more noise can be injected into counts of views compared to counts of shares or clicks, while noise added across a single action comes from the same normal distribution.

Table 1: Summary of notation in audience-level ideology estimates. $\tilde{x}$ denotes the estimate of $x$ computed from differentially private data.
For this work, we focus on URLs shared primarily in the US. All access to this data is allowed through the SS1 approval process and is conducted on the Facebook Open Research and Transparency platform.
---
A Robust Metric for Audience Ideology
We now turn to a metric for estimating the ideology of a domain's audience from this dataset. Prior work on media slant has shown audiences' political preferences have a marked relationship with the message, topic selection, and framing of news sources (Budak, Goel, and Rao 2016;Gentzkow and Shapiro 2010;Bakshy, Messing, and Adamic 2015). In this context, Bakshy, Messing, and Adamic (2015) propose a homophily-based metric of the degree to which a news article is aligned with a partisan audience "by averaging the ideological affiliation of each user who shared the article." In the differential-privacy-protected Condor dataset, we can replicate this metric at the web domain level by measuring the average political ideology of the individuals who share content from this domain. Table 1 summarizes the notation we use in defining our version of this metric. While Condor includes engagement metrics, we focus on sharing for consistent comparison with other work on ideology estimation.
As Condor provides engagement data for each PPA bin, we can interpret these counts as the frequencies with which an individual who is very liberal (PPA = -2), liberal (PPA = -1), etc. has engaged with this content. To estimate a domain's audience ideology $\zeta_D$ from this PPA metric, we calculate the weighted average across these five PPA bins, omitting the sixth PPA bin that contains audience engagement with unknown ideological affinity. Eq. 1 shows this metric as the product of each PPA value with the count of individuals who have shared that domain and have that PPA value. In Eq. 1, $s_b(D)$ represents the number of individuals with the PPA value $b$ who have shared the domain $D$. In Condor, however, engagement frequencies are at the URL/hyperlink $\ell$ level, not the domain level, so we must first compute $s_b(D)$ by aggregating $s_b(\ell)$ over all links $\ell$ in domain $D$, as shown in Eq. 2.
$$\zeta_D = \frac{1}{\sum_b s_b(D)} \sum_{b \in [-2..+2]} b \cdot s_b(D) \tag{1}$$

$$s_b(D) = \sum_{\ell \in D} s_b(\ell) \tag{2}$$
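Eqs. 1-2 translate directly into code; the sketch below uses our own variable names and toy share counts:

```python
from collections import Counter

def domain_shares_by_ppa(link_shares):
    """Eq. 2: aggregate per-link share counts s_b(l) over all links in a domain."""
    totals = Counter()
    for shares in link_shares:
        totals.update(shares)
    return totals

def audience_ideology(shares_by_ppa):
    """Eq. 1: average PPA bin value, weighted by the number of sharers per bin."""
    total = sum(shares_by_ppa.values())
    return sum(b * s for b, s in shares_by_ppa.items()) / total

# Two links in one domain, with share counts across the five PPA bins:
links = [
    {-2: 100, -1: 200, 0: 300, 1: 200, 2: 100},  # symmetric audience
    {-2: 0, -1: 0, 0: 0, 1: 0, 2: 900},          # strongly conservative audience
]
zeta_d = audience_ideology(domain_shares_by_ppa(links))  # 1.0
```

Here the conservative-leaning link dominates the aggregate, pulling the domain estimate to +1.0 even though one link has a perfectly balanced audience.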
In the absence of differentially private data protections, Eq. 1 is fundamentally the same metric as in Bakshy, Messing, and Adamic (2015) and is similar to Robertson et al. (2018). With the introduction of zero-centered Gaussian noise, however, these metrics are ill-behaved and can result in discontinuities, as we explain below.
---
How Noise Impacts this Metric Analytically
While the metric $\zeta_D$ in Eq. 1 is a straightforward calculation, differential privacy protections preclude observing the actual number of shares for a given PPA value directly. Instead, we observe a noised version of this value, shown in Eq. 3, where $\epsilon_{s,b}$ is drawn from a zero-centered Gaussian distribution with standard deviation $\sigma$. This $\sigma$ is constant for a single action (e.g., sharing) and reported in the Condor codebook (Messing et al. 2020). Hence, when calculating the number of individuals in PPA bin $b$ who have shared a domain $D$, we can only construct a noisy estimate of this quantity, $\tilde{s}_b(D)$ (Eq. 4). Substituting this value into Eq. 1 yields a noisy estimate of domain-level ideology, $\tilde{\zeta}_D$, as shown in Eq. 7.
$$\tilde{s}_b(\ell) = s_b(\ell) + \epsilon_{s,b} \tag{3}$$

$$\tilde{s}_b(D) = \sum_{\ell \in D} \tilde{s}_b(\ell) \tag{4}$$

$$= \sum_{\ell \in D} \left( s_b(\ell) + \epsilon_{s,b}(\ell) \right) \tag{5}$$

$$= s_b(D) + \epsilon_{s,b}(D) \tag{6}$$

$$\tilde{\zeta}_D = \frac{1}{\sum_b \tilde{s}_b(D)} \sum_{b \in [-2..+2]} b \cdot \tilde{s}_b(D) \tag{7}$$
Critically, the ratio in Eq. 7 is ill-behaved when the magnitudes of the actual shares $s_b(D)$ and the noise $\epsilon_{s,b}(D)$ are similar. In such cases, because the Gaussian noise is zero-centered and can be negative, the denominator can approach zero, which inflates the metric (examples of this behavior are shown in the section below on link-level estimates). This scenario can also lead to pathological cases in which the denominator is exactly zero (i.e., the noise exactly cancels the number of shares), resulting in discontinuities in the ideology estimate. Given the number of URLs in the dataset, these rare cases occur sufficiently often as to be problematic.
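This failure mode is easy to reproduce numerically. In the sketch below, the counts and noise draws are illustrative values of ours: for a lightly shared URL, a run of negative noise nearly cancels the denominator and inflates the estimate.

```python
# True shares per PPA bin for a lightly shared URL (sum = 30, zeta = 1/3),
# and an illustrative noise draw whose bins sum to -27, nearly canceling it.
true_shares = {-2: 5, -1: 5, 0: 5, 1: 5, 2: 10}
noise       = {-2: -4, -1: -6, 0: -5, 1: -6, 2: -6}

noisy = {b: true_shares[b] + noise[b] for b in true_shares}

zeta_true  = sum(b * s for b, s in true_shares.items()) / sum(true_shares.values())
zeta_noisy = sum(b * s for b, s in noisy.items()) / sum(noisy.values())
# The denominator collapses from 30 to 3, pushing zeta_noisy to the extreme
# of the scale even though the true audience is only mildly conservative.
```

With these values the true estimate is about 0.33, while the noisy estimate lands at 2.0, the boundary of the scale; had the noise summed to exactly -30, the estimate would be undefined.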
Analytically, Eq. 7 can be viewed as a ratio of Gaussian distributions, but these pathological cases result in this ratio having a Cauchy distribution, which has an undefined expected value. It is consequently difficult to analytically isolate and correct for the bias introduced by differentially private noise. Fortunately, other work has examined this bias, and we rely on Hayya, Armstrong, and Gressis (1975), Evans et al. (2019), and Evans and King (2021) for their discussion of weighted averages in the face of noise.
In particular, if counts are normally distributed, we could treat this instance as a ratio of correlated, non-central normal distributions and use the result from Hayya, Armstrong, and Gressis (1975) to find the expected value of this ratio. In that case, as long as the mean of the denominator's distribution is sufficiently large compared to the mean of the numerator, bias in this expectation goes to zero. While we cannot assume normally distributed counts in this dataset (see Papakyriakopoulos, Serrano, and Hegelich (2020) for a discussion of log-normal distributions in social media engagement data), accounting for bias when the denominator is sufficiently large is supported by Evans and King (2021). Evans and King (2021) relies on a Taylor approximation to expand a ratio of noised observations, leading to an upper bound on potential bias in this estimate, shown in Eq. 8, following from Eq. 2 of Evans and King (2021), where we replace $K$ with the number of PPA bins. Specifically, as long as the number of shares $s_b(D)$ is sufficiently greater than the variance of the noise added, Eq. 8 goes to zero. Restated, as long as $s_b(D) \gg \sigma_s$, or equivalently, $s_b(D)/\sigma_s \gg 1$, bias in this metric should be negligible. Borrowing from signal processing, we refer to this ratio of engagement to noise as the signal-to-noise ratio (SNR), defined in Eq. 9.
bias < 4 · σ² / (Σ_b s_b(D))²  (8)

SNR = (Σ_b s_b(D))² / σ²  (9)
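As a concrete illustration of the pathology above, the following sketch (toy values only: the bin weights [-2, ..., 2] stand in for the five PPA bins, and the per-bin share counts and noise scale are assumptions, not Condor values) applies an Eq. 7-style weighted average to noised counts whose noise scale is comparable to the counts themselves. Near-zero denominators produce Cauchy-like heavy tails:

```python
import numpy as np

rng = np.random.default_rng(0)
BINS = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # stand-ins for the five PPA bins
shares = np.array([4.0, 8.0, 16.0, 8.0, 4.0])  # toy per-bin shares; true ideology 0

def noisy_estimate(sigma):
    """Weighted-average ideology from noised per-bin counts (Eq. 7 analogue)."""
    noisy = shares + rng.normal(0.0, sigma, size=5)
    return (BINS * noisy).sum() / noisy.sum()

# Noise scale comparable to the counts: the denominator can approach zero.
ests = np.array([noisy_estimate(sigma=15.0) for _ in range(20_000)])
print("median |error|:", round(float(np.median(np.abs(ests))), 2))
print("max |error|:   ", round(float(np.abs(ests).max()), 1))  # heavy-tailed
```

Because the estimate is effectively a ratio of Gaussians, a handful of draws with near-zero denominators dominate the maximum error even though the typical (median) error stays modest.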
---
Impacts of Noise via Simulation
As we show above, bias in our metric is negligible for a sufficiently high SNR, but that analysis does not tell us what a sufficient-SNR regime might be. We thus turn to simulation to evaluate potential bias in the environment specific to the Condor dataset and construct two experiments: First, we evaluate whether the engagement observed for popular domains in the Condor dataset is sufficient for our metric to be unbiased. Second, we examine the relationship between SNR and bias to get a sense of what levels of SNR and observed sharing are necessary to produce tight estimates of political ideology.
Estimating Bias for Popular Domains In the first simulation experiment, we test the hypothesis that ζ̂_D - ζ_D = 0, i.e., whether the noise in Condor drives a significant difference in our estimates of ideology. We perform this analysis after observing engagement data for the top 1% most-shared domains in the Condor dataset, which we select under the expectation that these popular domains achieve the necessary signal-to-noise ratio. In the alternative case, i.e., if these domains do not have sufficient shares to be in the high-SNR regime, the noise added to this dataset may overwhelm any useful signal. At a high level, each run of the simulation starts by drawing ideology scores ζ_D for a given number of domain observations n_obs. For each domain, we sample a count of the links to this domain u_D from a log-normal distribution and sample link-level ideology estimates ζ_ℓ for each link from a normal distribution centered at ζ_D. We then sample the number of shares s(ℓ) for each link in this domain, also from a log-normal distribution, and distribute these shares across the five PPA bins according to ζ_ℓ. This process yields a collection of domains with associated links and shares across PPA bins for each link, mirroring the Facebook collection prior to noise injection.
To simulate the noise-generating process, we add noise to each link's simulated shares s_b(ℓ) using the exact process outlined in the Condor codebook. We aggregate these link-level shares up to the domain level and estimate ideology ζ_D from these noisy observations. Comparing this estimate from noisy sharing counts to the actual ideology yields an estimate of the bias added by the differentially private noise. Parameters for this simulation come from a qualitative assessment of the Condor dataset and are shown in Table 2.
We then run the simulation with n_sim = 100 iterations, sampling n_obs = 100 domains per iteration, and calculate the mean bias ζ̂_D - ζ_D and the Monte-Carlo standard error for each iteration. The simulation produces an estimated bias of -4.006 × 10^-5 with a Monte-Carlo standard error of 5.4940 × 10^-5. Variance across simulation iterations is also small, at 3.0485 × 10^-7. These results show that the proposed estimator's bias is not statistically significant, nor is the difference practically significant on the [-2, 2] scale of PPA.
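The simulation can be sketched as follows. This is a scaled-down, illustrative version rather than the paper's exact code: the link- and share-count distributions are shrunk (LN(3, 1) and LN(7, 1) instead of the Table 2 values) for speed, a single assumed aggregate noise scale of 100 replaces the codebook's composition of factors, and, mirroring the focus on popular domains, synthetic domains with too few total shares are skipped:

```python
import numpy as np
from math import erf

BINS = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
EDGES = [-np.inf, -1.5, -0.5, 0.5, 1.5, np.inf]

def bin_probs(zeta_l, sd=0.75):
    """Probability that a share from a link with ideology zeta_l lands in each bin."""
    cdf = np.array([0.5 * (1.0 + erf((e - zeta_l) / (sd * 2 ** 0.5))) for e in EDGES])
    p = np.diff(cdf)
    return p / p.sum()

def simulate_noise_bias(n_domains=200, noise_sigma=100.0, min_shares=2000, seed=7):
    """Mean difference between ideology estimated from noised vs. clean bin counts."""
    rng = np.random.default_rng(seed)
    diffs = []
    while len(diffs) < n_domains:
        zeta_d = rng.uniform(-2.0, 2.0)                  # true domain ideology
        n_links = max(1, int(rng.lognormal(3.0, 1.0)))   # links per domain (toy scale)
        bin_shares = np.zeros(5)
        for _ in range(n_links):
            zeta_l = rng.normal(zeta_d, 0.5)             # link-level ideology
            n_shares = max(1, int(rng.lognormal(7.0, 1.0)))
            bin_shares += rng.multinomial(n_shares, bin_probs(zeta_l))
        if bin_shares.sum() < min_shares:
            continue                                     # keep only popular domains
        clean = (BINS * bin_shares).sum() / bin_shares.sum()
        noised = bin_shares + rng.normal(0.0, noise_sigma, size=5)
        diffs.append((BINS * noised).sum() / noised.sum() - clean)
    diffs = np.asarray(diffs)
    return diffs.mean(), diffs.std() / np.sqrt(n_domains)

mean_bias, mc_se = simulate_noise_bias()
print(f"mean bias: {mean_bias:+.5f} (MC s.e. {mc_se:.5f})")
```

Even at this reduced scale, the noise-induced bias comes out statistically indistinguishable from zero, consistent with the reported result.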
A Note on Aggregation One may be tempted to estimate ideology by first calculating ζ_ℓ at the link level and then taking the mean across all links in domain D to obtain ζ_D, which is more consistent with the metric provided in Bakshy, Messing, and Adamic (2015). This approach produces a much higher Monte-Carlo standard error, however, because the sharing signal in a single link is generally much smaller relative to the additive noise than it is in the aggregate. We provide guidance on when such link-level estimates are reasonable in a later section.
---
Relationships between SNR and Bias
In the second simulation experiment, we examine the relationship between SNR and the variance of our ideology estimator. This experiment fixes the number of links a domain has and varies the number of shares necessary to achieve a target SNR, defined in Eq. 9. We run this experiment with two fixed values for the number of links in a domain, first setting u_D = 1024 to evaluate SNR for domain-level aggregates and then setting u_D = 1 for cases where researchers want to study a single link. Varying SNR over the interval [1, 1024] shows that SNR ≥ 16 results in tight estimates of ideology, regardless of whether we aggregate over many or few links. These metrics derive from n_sim = 500 runs of n_obs = 100,000 domains with uniformly distributed ideologies for each SNR.
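A compact version of this sweep can be sketched as below. This is a toy sketch, not the paper's code: it uses a single domain with shares spread evenly over the five bins, an assumed per-bin noise scale of 10, and a robust median-based spread measure (the raw standard deviation is dominated by the Cauchy-like tails at low SNR):

```python
import numpy as np

BINS = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

def estimate_spread(snr, sigma_bin=10.0, n_rep=4000, seed=0):
    """Robust spread (median absolute deviation) of the weighted-average
    ideology estimate when total shares are set to hit a target SNR (Eq. 9)."""
    rng = np.random.default_rng(seed)
    sigma_total = sigma_bin * np.sqrt(5)                 # noise std aggregated over bins
    shares = np.full(5, np.sqrt(snr) * sigma_total / 5)  # s(D) = sqrt(SNR) * sigma
    noisy = shares + rng.normal(0.0, sigma_bin, size=(n_rep, 5))
    ests = (noisy * BINS).sum(axis=1) / noisy.sum(axis=1)
    return float(np.median(np.abs(ests - np.median(ests))))

for snr in (1, 4, 16, 64, 256):
    print(f"SNR={snr:4d}  spread={estimate_spread(snr):.3f}")
```

The spread shrinks monotonically as SNR grows, matching the qualitative finding that estimates tighten once SNR clears the high-SNR threshold.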
---
Table 2: Simulation parameters.

Parameter: Description
ζ_D: Audience-level ideology of a domain D, drawn from a three-component Gaussian mixture model.
ζ_ℓ: The mean political page affinity for a given link ℓ ∈ D, drawn from N(ζ_D, 0.25), with σ = 0.5 to provide separation between bins.
u_D: The number of hyperlinks to domain D for which we have sharing data, drawn from a log-normal distribution LN(9, 1), as estimated from the Condor dataset.
s(ℓ): The number of shares for link ℓ, drawn from a log-normal distribution LN(7, 1), as observed within the Condor dataset.
s_b(ℓ): The number of shares in political page affinity bin b for link ℓ, which we allocate from s(ℓ) by drawing 100 samples from N(ζ_ℓ, 0.5625) and scaling up; σ = 0.75 is chosen so that most mass lies within ±1.5 of the mean.
ϵ_b(D): Noise added to sharing in political page affinity bin b for domain D, drawn from N(0, σ² · 3 · 7 · 36), where σ is taken from the Condor codebook and the multiplicative factors account for aggregation along demographic and temporal bins (i.e., gender, age, and month).

A remaining question concerns the level of observed sharing needed for tight estimates. To this end, we have explored the minimal amount of observed sharing (i.e., noised share counts), averaged over several link-sharing counts and privacy-protecting noise levels, necessary for tight bounds. This exploration suggests the relationship between sufficient observed sharing and aggregate noise (i.e., noise accumulated from differential privacy protections and aggregation over multiple links, demographic bins, and temporal timeframes) is linear in log-log space for a fixed SNR. Using this framework, we then estimate the relationship between this noise and the target volume of observed sharing needed for tight estimates at SNR = 16 (sufficiently high to ensure tight estimates). This model is shown in Eq. 10, where 16 is the SNR; 5, 3, and 7 are the numbers of PPA, gender, and age bins, respectively; m is the number of months the aggregation covers; u_D is the number of links over which one is aggregating; and σ_dp is the noise added for differential privacy. This equation for s(D), the amount of observed sharing, allows us to set a lower bound on the volume of observed sharing necessary for stable estimates. It also allows us to vary the injected noise σ_dp, meaning we can estimate the minimum quantity of engagement one should observe for other types of activity in Condor as well.
s(D) ≳ 1.578 · √(16 · 5 · 3 · 7 · m · u_D · σ_dp²)  (10)
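A small helper makes the bound concrete. This is our reading of Eq. 10 as a sketch, not canonical code; the constant 1.578 and the σ_dp = 14 noise scale for shares are taken from the text, and the computed values line up with the figures quoted elsewhere in the paper:

```python
import math

def min_observed_shares(m, u_d, sigma_dp=14.0, snr=16.0):
    """Eq. 10 sketch: lower bound on observed (noised) share volume for a
    stable ideology estimate. 5, 3, 7 are the PPA, gender, and age bin
    counts; m = months aggregated; u_d = links aggregated; sigma_dp is the
    per-cell differential-privacy noise scale (14 for shares, per the text)."""
    return 1.578 * math.sqrt(snr * 5 * 3 * 7 * m * u_d * sigma_dp ** 2)

# One link over one month needs roughly 906 observed shares; the same bound
# with u_d = 1 and m = 60 (five years) gives the ~7,014-share link threshold.
print(round(min_observed_shares(m=1, u_d=1)), round(min_observed_shares(m=60, u_d=1)))
```

That the single-link, single-month bound reproduces the ~906-share figure, and the five-year bound the ~7,014-share link-level threshold, is a useful consistency check on the formula.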
---
Audience Ideology for Popular Domains
We now use this metric and the bounds on sharing to examine the distribution of audience ideologies among Facebook's top 1% most popular US domains among politically engaged Facebook users. This analysis covers 2,629 domains out of a possible 2,644, as 15 domains were excluded for having insufficient activity to achieve the target SNR relative to their number of links u_D. These excluded domains include SoundCloud.com and ReverbNation.com, which both have high numbers of unique links in Condor, leading to large gaps between observed and needed sharing and making their estimates suspect. Figure 1 presents distributions of our estimated domain-level ideologies, with a selection of domain annotations, divided into news (1a) and non-news domains (1b). For news domains, we select domains that have ratings from the NewsGuard trust rating service and exist in our top-1% set, resulting in 1,279 news-oriented domains. This collection shows a tri-modal distribution, with traditionally liberal news sources (e.g., CNN, the New York Times, and Huffington Post) on the left, conservative news on the right (e.g., Fox News, Breitbart), and more centrist reporting, such as C-SPAN, around the center. For non-news sites, we see that many centrally oriented domains are primarily shopping, social networking, crowd-funding, and sports sites, whereas non-news domains at the ideological extremes are primarily activist organizations (e.g., the Southern Poverty Law Center splcenter.org or the National Rifle Association's Institute for Legislative Action nraila.org).
As our metric captures the ideological lean of a domain's audience, in the context of news sources Gentzkow and Shapiro (2010) suggest this measure should be highly correlated with the "slant" of these sources (which we indeed see in Figure 2d in the following section). For national news outlets like the New York Times, Breitbart, etc., these placements are consistent with traditionally accepted partisan positions (e.g., as in Media Bias Fact Check). At the local level, we find local-affiliate news stations (e.g., WTOP in Washington, DC or KATC in Lafayette, Louisiana) are aligned with more ideologically moderate audiences, with KQED in Berkeley, CA having the most liberal and partisan audience of the local affiliates; the national media outlets, on the other hand, are better separated. This alignment among local affiliates is consistent with the literature (Bakshy, Messing, and Adamic 2015; Gentzkow and Shapiro 2010), as Berkeley, CA leaned heavily liberal in the 2016 presidential election and Lafayette, LA leaned heavily conservative (Dottle 2019). The data also suggest, as seen in other work (Jurkowitz et al. 2020), a wider gap between the moderate and conservative components than between the moderate and liberal components, suggesting that conservative media sites are more insulated from mainstream media.
---
Comparisons to Other Ideology Measures
To validate the audience ideology metric calculated from differentially private sharing data, we compare our results against four external measures. Our first comparison is with the measure of Robertson et al. (2018), which uses ratios of shares from registered Republicans' and Democrats' Twitter accounts; we find a significant Pearson's correlation ρ = 0.9295 for 1,675 domains (see Figure 2a). Our second comparison is with a homophily-based measure introduced by Eady et al. (2020) (Figure 2b), where we achieve a strong correlation ρ = 0.9386 across 154 domains. Our third comparison is with the similarly defined ideology scores introduced in Bakshy, Messing, and Adamic (2015), where we find the highest correlation (ρ = 0.9522) for 112 domains. Lastly, we compare against a content-based media slant estimate for 16 newspapers analyzed in Budak, Goel, and Rao (2016), where we find our lowest, but still strong, correlation (ρ = 0.8675).
Despite the complexity introduced by differential privacy protections in Condor, our estimates are strongly correlated across all four of these comparisons.
---
Beyond the Most Popular Domains
Above, we focus on the top 1% of domains, as these domains are more likely to exceed the threshold established in Eq. 10 and because these domains are well-captured in other works on audience ideology. Our method is not restricted to popular domains, however, as domains that are shared less often may still have sufficient signal to exceed our threshold: e.g., using Eq. 10, a domain with a single link observed over one month of data need only have about 906 observed shares (i.e., shares with added noise) to provide stable estimates. As the most recent iteration of the Condor dataset contains 363,738 domains over five years, one may then ask how many of these similarly exceed the threshold we establish for stability. This quantity is also important for the creators of the Condor dataset, as it sheds light on the tradeoff between differential privacy protections and data utility in downstream analysis.
To answer this question, we randomly sample 1,024 domains from the Condor dataset and measure the proportion that exceed the threshold of observed shares in Eq. 10. For all domains, we use the same noise scale σ_dp = 14 and set m and u_D to the number of months for which the domain has data in Condor and the number of unique hyperlinks to that domain, respectively. Of these 1,024 domains, 43 exceed this threshold, i.e., the top 4.2% of domains in this set. In comparison, over 99.4% of the top 1% of domains exceed this threshold. While this proportion is low, it still leaves in excess of 15 thousand domains that will produce stable ideology estimates using the method described above.
This result also has an important implication for the Condor dataset's construction more generally. While we note a couple of ways one might increase this proportion through relaxing constraints or focusing on link-level estimates (as we do in the following section), it is also true that the application of differentially private noise to the Condor dataset is done with limited insight into the downstream impact this noise has on analyses. Hence, this finding motivates a call to Facebook to revisit its privacy budget and investigate the balance between adding noise and reducing utility of this large dataset.
---
Link-Level Audience Ideology Estimates
In the preceding section, we focused on demonstrating the validity of the audience-ideology metric by showing the bounds in SNR for which ideology estimates are tight and by comparing our domain-level metrics to several extant datasets without differential-privacy protections. This metric is not specific to domain-level aggregates, however, and is equally amenable to estimating audience ideology at the individual link level, as in Bakshy, Messing, and Adamic (2015). This link-level analysis is a major advantage of the Condor dataset, as the scale at which it provides these engagement metrics alleviates the sparsity issues that are a major barrier to link-level analyses in other sources. That is, the Condor dataset provides a much larger volume of link-level data than academic researchers are generally able to access, allowing for novel insights into ideological distributions across individual links rather than domain-level aggregates. That said, the noise added to engagement metrics for individual links may be relatively high compared to domain-level aggregates, as many individual links alone likely do not receive sufficient engagement to support ideology estimates using our proposed metric.
To illustrate this point, we estimate audience ideologies for individual links to six popular domains across the ideological spectrum. Five of these domains are news sources, which we order from left (liberal) to right (conservative): Huff Post, the New York Times, C-SPAN, Fox News, and Breitbart. Traditionally, Huff Post and the New York Times are considered left-leaning sources, whereas Fox News and Breitbart are right- and far-right sources; in contrast, C-SPAN is a non-profit, non-partisan source that primarily covers the US House of Representatives. We also include YouTube as our sixth domain, given its substantial role in the online news ecosystem. For each of these domains, we calculate audience estimates using the ninth iteration of the Condor dataset, covering 2017-2021, and show link-level distributions for all links in each domain (Figure 3a) and only for those links with sufficient engagement, as estimated by Eq. 10, i.e., s(ℓ) > 7,014 (Figure 3b). Tables 3a and 3b show summaries of these figures as well. Distributions of audience ideology computed over all links from each domain consistently show extreme variation, with YouTube showing the widest range, from -5,760.0 to 5,167. In contrast, focusing only on links with SNR > 16 yields more stable averages and more informative distributions. Interestingly, links to YouTube videos appear symmetrically distributed in their audience ideologies, and C-SPAN shows wider variation (though still constrained to [-1.9, 2.2]). The remaining four domains, traditionally considered partisan-leaning, exhibit the expected partisan distributions, with the majority of links falling on one side of the ideological spectrum. Some links from these domains do cross the ideological divide, though; e.g., 1,169 of the links in Figure 3b from the New York Times have an audience ideology measure ζ_ℓ > 0. One article in particular, "I Wanted to Be a Good Mom. So I Got a Gun," was shared by a solidly right-leaning audience (ζ_ℓ = 2.2109).

Table 3: Summary statistics for link-level audience ideology for six major domains. Consistent with Figure 3, ideology measures using all links (a) exhibit high variance, potentially masking useful structure, which emerges when we constrain links to those that are sufficiently popular (b).
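The effect of this engagement filter can be illustrated with synthetic link-level data. The sketch below is a toy construction, not Condor data: a hypothetical right-leaning outlet with 1,000 links, Gaussian bin allocation as in Table 2, and an assumed per-bin noise scale of 100:

```python
import numpy as np

rng = np.random.default_rng(9)
BINS = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

# Toy link table: noised per-bin share counts for 1,000 links from a
# hypothetical right-leaning outlet (true link ideologies near +0.8).
true_zeta = rng.normal(0.8, 0.4, size=1000)
totals = rng.lognormal(6.0, 2.0, size=1000)               # heavy-tailed share counts
weights = np.exp(-0.5 * ((BINS[None, :] - true_zeta[:, None]) / 0.75) ** 2)
clean = totals[:, None] * weights / weights.sum(axis=1, keepdims=True)
noisy = clean + rng.normal(0.0, 100.0, size=clean.shape)  # DP-style additive noise

zeta_l = (noisy * BINS).sum(axis=1) / noisy.sum(axis=1)   # per-link estimates
keep = noisy.sum(axis=1) > 7_014                          # threshold from Eq. 10
print("range, all links:     ", float(zeta_l.min()), float(zeta_l.max()))
print("range, filtered links:", float(zeta_l[keep].min()), float(zeta_l[keep].max()))
```

As in Figure 3, the unfiltered estimates are dominated by a few low-engagement links with near-zero denominators, while the filtered subset stays near the outlet's true lean.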
---
Comparing Shares, Likes, and Views
Prior sections use sharing as the primary mode of engagement so that we can compare against similar methods on Facebook (e.g., Bakshy, Messing, and Adamic 2015) and Twitter (e.g., Robertson et al. 2018; Eady et al. 2020). Concerns with these measures include 1) sharing as a proxy for viewership and 2) counter-attitudinal sharing, wherein an individual shares a particular article to criticize it. Often, exposure to information is more important than who is sharing it, but sharing activity is more readily available, so it is used in place of exposure. Similarly, while prior work shows criticism is one of the primary motivations to share content in political discourse (Kim, Jones-Jang, and Kenski 2020), counter-attitudinal sharing is relatively rare (An, Quercia, and Crowcroft 2014), so it is generally ignored. The Condor dataset is not limited to sharing, though, as it contains measures of likes, views, and other activity, although the added noise varies by action (e.g., σ = 10 and σ = 2,228 for likes and views, respectively). We can therefore compare whether the population sharing a particular domain is significantly different from the population that likes or views this content, potentially mitigating concerns around share-based measures. We thus compute audience ideology metrics for 2,227 domains across three engagement types (sharing, viewing, and liking) and compare them in Figure 4. This figure demonstrates that few statistically significant differences in ideological distribution exist among share-, view-, and like-based measures, as supported by a one-way ANOVA test (F(2, 2225) = 1.899, p = 0.1498). That is, comparing the distributions of inferred domain-level ideology metrics across the three activity types, we see no significant differences in these audience-ideology values.
Correlation among all three metrics is also very strong (> 0.97), suggesting that differences based on engagement type are less impactful than the cross-method differences shown in Figure 2. Hence, despite potential concerns around share-based measures as proxies for exposure and around counter-attitudinal sharing, the ideological alignments of the audiences sharing, viewing, and liking these domains are statistically indistinguishable.
Separate from these concerns, one may expect differences in the extremes of these ideology distributions, as political sharing on Facebook is relatively rare (Bakshy, Messing, and Adamic 2015). We therefore test an alternative measure by taking the absolute value of the ideology metric, |ζ_D|, to measure potential partisanship rather than liberal/conservative ideology. Comparing this partisanship metric based on shares, views, and likes (Figure 4b) shows a far more significant difference among these three distributions (F(2, 2225) = 17.95, p << 0.001). A post-hoc Tukey test shows a moderating effect of viewership, in that a domain's viewing audience appears significantly more moderate than its sharing and liking audiences (p < 0.001).
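For readers unfamiliar with the test, the comparison reduces to a one-way ANOVA over three groups of domain-level ideology estimates. The sketch below implements the F statistic from scratch on synthetic data (noisy copies of a shared latent ideology, standing in for the real share-, view-, and like-based estimates; the group size 2,227 matches the text, everything else is assumed), and reproduces the null result qualitatively:

```python
import numpy as np

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group MS over within-group MS."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, n = len(groups), len(all_x)
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(11)
# Toy stand-in for the real comparison: share-, view-, and like-based ideology
# estimates for 2,227 domains, modeled as noisy copies of one latent ideology.
latent = rng.uniform(-2.0, 2.0, size=2227)
by_shares = latent + rng.normal(0.0, 0.1, 2227)
by_views = latent + rng.normal(0.0, 0.1, 2227)
by_likes = latent + rng.normal(0.0, 0.1, 2227)
f_stat = one_way_anova_f(by_shares, by_views, by_likes)
print(f"F(2, {3 * 2227 - 3}) = {f_stat:.3f}")  # small F: no engagement-type effect
```

In practice one would use a library routine (e.g., scipy's `f_oneway`) plus a post-hoc Tukey test, but the hand-rolled version makes the between/within decomposition explicit.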
A Note on Views We note that the interpretation of "views" in Condor does not directly capture the number of individuals who have visited and viewed a given URL outside the Facebook platform. Instead, the Condor codebook defines "views" as the "number of users who viewed a post containing the URL" (Messing et al. 2020). That is, an individual may view a link outside the Facebook platform, and this view would not be captured in Condor's "view" count. Conversely, an individual may view a post in Facebook containing a specific link without visiting the link, and this interaction would be captured in Condor's "view" count; actually visiting the link is captured by the "click" count. While this interpretation omits off-platform engagement, Condor's operationalization of "views" still captures important network-driven aspects of exposure that other work is largely forced to omit, given the commercial sensitivity of this measure. Hence, the result above should be interpreted as: the audience exposed to a particular domain within Facebook is significantly more moderate than the audience that shares and likes that domain.
---
Threats to Validity
Though the Condor dataset is a milestone in the availability and transparency of social media data, concerns remain around how such data is collected and protected. First, the process by which the Condor dataset is constructed is opaque to researchers, in that parties outside of Facebook are not allowed to inspect the code used to select URLs or to calculate metrics like PPA. As a result, researchers are forced to trust that Facebook's URL selection process is correct. Likewise, researchers are given limited insight into how much data is omitted from the Condor dataset because links do not meet the 100-unique-user threshold on public shares. While internal Facebook developers have made some data available about this threshold's relation to the distribution of on-platform links, external review of the data pre- and post-application of the privacy-protecting noise remains unavailable. This latter issue is of particular concern as, in the fall of 2021, external researchers identified a flaw in the Condor dataset that significantly undercounted engagement in the US (Alba 2021). This flaw led to Condor's omission of engagement from US users whose political preferences (i.e., PPA bin) could not be identified; that is, while other demographic bins could be null to capture shares from, say, individuals with an unknown gender, no data existed in the dataset for the many users who did not follow sufficient political pages to have an identifiable PPA value. Though this paper was unaffected by this error (as we ignore shares from null-PPA users), and Facebook has since corrected the issue, the lack of transparency around Condor's creation and population remains a problem.
Second, while the privacy protections applied to the Condor dataset serve a crucial purpose, how these protections impact research methods remains an open question. Our proposed metric requires a sufficient level of activity to overcome the additive noise, which means many important but rare phenomena may be masked. Consequently, domains and links shared among extreme partisan audiences may be included in the dataset but have insufficient signal for useful analyses. More worryingly, this masking could be asymmetric and result in ideological bias in which links are included. To test this possibility, we took 500 domain-level ideology scores from Bakshy, Messing, and Adamic (2015) and assessed the overlap between that work and our set. Using a logistic regression model to assess whether a domain's audience ideology score predicts its inclusion in our dataset, we find no statistically significant relationship between the two factors. For less popular phenomena, however, this question of bias remains open, and a fundamental tension exists between these rare instances and the differential privacy protections, as these protections add noise precisely to prevent identification of such rare instances. How these two factors interact should be a subject of future work.
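The inclusion check is, in essence, a logistic regression of a binary inclusion indicator on audience ideology. The sketch below uses toy data (inclusion drawn independently of ideology, so the fitted slope should be near zero) and a minimal gradient-descent fit rather than the library fit one would use in practice:

```python
import numpy as np

def logit_slope(x, y, lr=0.5, steps=4000):
    """Minimal logistic regression (intercept + slope) via gradient descent;
    returns the fitted slope on x."""
    X = np.column_stack([np.ones_like(x), x])
    w = np.zeros(2)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted inclusion probability
        w -= lr * X.T @ (p - y) / len(y)   # average log-loss gradient step
    return w[1]

rng = np.random.default_rng(5)
# Toy stand-in: 500 domain ideology scores; inclusion drawn independently of
# ideology, so ideology should not predict inclusion.
ideology = rng.uniform(-2.0, 2.0, size=500)
included = (rng.random(500) < 0.6).astype(float)
print(f"slope: {logit_slope(ideology, included):+.3f}")
```

A slope indistinguishable from zero corresponds to the null result reported above; a clearly nonzero slope would flag ideologically asymmetric masking.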
Third, we stress that our ideology metrics measure the audience of a domain or link, not the actual content of the domain or link itself. While much related work in this area makes similar assessments (e.g., Bakshy, Messing, and Adamic (2015); Robertson et al. (2018)), it is important to note that content on some of these sites may not be overtly partisan but may be more attractive or better known to partisan audiences. As noted in Gentzkow and Shapiro (2010), the ideological slant of a media outlet's audience does affect what that outlet chooses to cover and how, but this distinction between content and audience is important for interpreting this work.
---
Ethics and Competing Interests
This work's intent is to provide a broader audience with an example of working with social media and digital trace data that has been protected with differential privacy techniques. Though this work focuses on audience ideology, the methods are equally applicable to aggregations across other demographic bins or activities. Similarly, our focus on ideology results in a US-oriented analysis, as the Condor dataset only provides ideology-relevant PPA assessments for US users. While this is a clear limitation, it hints at the need for broader perspectives on how such left/right scales can be generalized to other national contexts, as discussed in Lo, Proksch, and Gschwend (2014). Ultimately, though, teams internal to Facebook would have to extend the Condor dataset to include page-affinity scores for non-US audiences.
Regarding research ethics, this work, and the Condor dataset more generally, raises some considerations worth noting. Condor's privacy protections provide value in preventing identification of individual users' actions on the platform, but at the cost of obfuscating rare phenomena. Vulnerable and minority groups who might be over-represented in these rare instances are potentially disproportionately impacted by these protections, as researchers balance preserving privacy with studying how behaviors on the platform may impact these groups. More work is needed to assess how platforms like Facebook interact with these populations and how we might study these interactions while still providing a reasonable level of protection for these users.
While we claim no conflicts of interest, for transparency, we note that one of the authors of this work has received funding from Facebook related to the Social Science One initiative. This funding was not for this work, and while Facebook has had the opportunity to review this work prior to publication as part of the Social Science One agreement, they do not have authority to prevent publication. Finally, this work was reviewed by university internal review boards as a prerequisite for gaining access to the Condor dataset.
---
Conclusions
Through the above assessments of our proposed audience-ideology measure, based on a simple weighted average of online behavior across ideologically grouped audiences, this paper presents three core contributions: First, this measure and its assessment provide guidance for researchers seeking to use differential-privacy-protected digital trace data in analyses of online political behavior, made more compelling by demonstrating agreement with other published measures that lack these protections. Second, we extend this domain-level analysis to demonstrate how our proposed metric can provide insights at the individual link level, which is often made difficult by sparsity concerns in other datasets. Third, we contribute to studies of media slant and online political engagement by extending this analysis to types of online activity beyond sharing, i.e., views and likes. As Condor is the largest dataset of its kind and the primary mode of access to Facebook data for researchers unaffiliated with Facebook, the endogenous metric for audience ideology we provide, along with the related insights for SS1 researchers looking to leverage this unique dataset, may accelerate research in this space.
---
Data Availability
Access to the Social Science One dataset used in this analysis is governed by the Research Data Agreement, made available as a joint effort between Facebook and the Social Science One Consortium: https://socialscience.one/researchdata-agreement.
Background: Latin American countries have been profoundly affected by COVID-19. Given the alarming incidence of identified cases, we sought to explore which psychosocial elements may influence poor adherence to the mandatory control measures among the population. Objective: We aimed to assess Peruvians' knowledge, attitudes, and vulnerability perception during the coronavirus outbreak. Method: We collected data from 225 self-selected participants using a web-based cross-sectional survey. Results: Most respondents were between 18 and 29 years old (56.8%), female (59.5%), belonged to educated groups, and were graduated professionals (69.3%). Logistic regression showed that Knowledge is highly associated with education (p = 0.031), occupation (p = 0.002), and age (p = 0.016). Our study identified that, although people reported adequate Knowledge by identifying the expected symptoms and transmission routes of COVID-19, there is significant perceived susceptibility to contracting the virus, with respondents displaying stigmatized behavior (59.1%) and fear of contracting the virus from others (70.2%). Respondents also reported a lack of confidence in the national health authorities' sanitary response (62.7%), in their preparedness for the disease (76.9%), and in the adequacy of the measures taken to deal with it (51.1%). Conclusion: We found that age, education, and occupation modulate Knowledge, while only age affected Perception and Attitude. Public policies should consider specific guidelines on knowledge translation and risk communication strategies, both to contain psychological responses promptly and to ensure compliance with general control measures by the population.

Introduction
In December 2019, a new viral infection emerged in Wuhan, China [1], named novel coronavirus disease (COVID-19) by the World Health Organization [2]. The unknown nature of the virus has led to alarming death rates in many countries worldwide, putting strain on health systems [2,3]. Studies comparing COVID-19 to previous epidemics like SARS or MERS reveal that the virus has a much broader dispersal capacity [4], indicating a higher potential risk and potentially surpassing infection and death rates previously reported [2,4]. According to the World Health Organization, the number of confirmed cases worldwide is around 1.7 million [5]. COVID-19 has rapidly spread across geographical boundaries, prompting various countries to implement public health protocols to control its spread. Social distancing, hand washing, and city lockdowns have been implemented. This critical situation has elicited various reactions among the population, causing anxiety and fear, particularly among those unaffected by the virus [6].
In Latin America, COVID-19 represents an unprecedented challenge as similar viruses like SARS and MERS have not been experienced in the region before. Many Latin American countries find their public healthcare systems unprepared to handle the epidemic. In Peru, the virus's rapid spread, even among mildly symptomatic or asymptomatic individuals, highlights the need to understand the population's behavioral responses to the situation.
Limited studies on knowledge and attitudes during epidemics exist in South America. Earlier studies in the region suggest that the population tends to be hesitant in adopting control measures during outbreaks of diseases like chikungunya, zika, and dengue [7,8]. Non-compliance with government measures during these outbreaks was possibly due to the limited impact on some geographical regions with favorable climatic conditions for those mosquito-borne diseases [8][9][10].
In response to COVID-19, countries have imposed strict control measures to prevent mortality rates from escalating. After confirming its first case on 6 March 2020, Peru implemented strategies such as social distancing, continuous hygiene practices, and limits on public movement and on access to non-essential places [11]. However, despite the mandatory nature of these protective measures, adherence among the population has been poor, signaling an alarming lack of commitment among certain groups [6,10,12,13].
Studies analyzing attitudes and knowledge about COVID-19 in Hubei, China, show that attitudes towards government containment measures are closely associated with the level of knowledge about the virus [12]. Individuals with higher information and education levels tend to have more positive attitudes toward preventive practices [6,12]. The perception of risk plays a significant role in the commitment to preventive behaviors during global epidemics [6,10,[14][15][16][17].
Perception of risk may be influenced by the type of information individuals have. Lack of information or misinformation can be a barrier, increasing the likelihood of infection [14]. However, people's judgments are often based on their perception of risk rather than actual risk [16]. During the SARS epidemic, psychological responses generated massive distress, leading to "disproportionate" reactions in the population [18].
Experts in Australia found that poor public communication policies during the H1N1 influenza epidemic contributed to mass panic in the population and non-compliance with containment measures [19]. Individual attitudes toward public policies significantly influence the effectiveness of containment measures [6,19]. Despite government efforts, the passive Attitude towards implemented policies continues to impact the population's health and that of their close relatives.
The lack of knowledge about COVID-19 may mediate increased virus infection rates. Similar cases, like the Ebola virus outbreak, showed that a poor understanding of the disease and its transmission contributed to higher case rates [20]. Knowledge of infection processes and precautions can influence citizens' adherence to government guidelines.
Systematic reviews stress the importance of educating affected populations to increase their understanding of the disease cycle and facilitate the adoption of preventive measures [10]. However, studies in developed countries like Singapore indicate that citizens may require less information to comply with government measures, suggesting high trust in their leaders [21]. It is crucial to consider potential biases in these studies, as they mainly assess individuals with higher education levels during the epidemic.
Given the lack of previous studies on outbreaks, knowledge, or risk perception in Peru, our survey aims to assess the population's level of knowledge regarding COVID-19, its symptoms, transmission, and severity. Additionally, we aim to evaluate the perceived risk and seriousness among the Peruvian population and their behaviors in response to the disease.
---
Materials and Methods
---
Participants
This work is a descriptive, cross-sectional study through a web-based survey [22] conducted between 15 March and 3 April 2020. An initial sample of 225 Peruvian individuals was explicitly recruited in the initial period of the lockdown. The mean age was 31.20 ± 10.97, ranging from 17 to 77 years old, and 59.6% were females. The survey questions were adapted and modified from previously published literature regarding viral epidemics [13,15,21,[23][24][25][26][27][28], most related to SARS or MERS disease. The test respondents commented that the questions were easily understood, and the average completion time was 10 min. Informed consent was obtained before starting the survey. Respondents were assured that their responses would be confidential and reminded that their participation in the survey was voluntary. Their Knowledge was evaluated against facts published by WHO [29]. This study was conducted using a convenience sampling of the general Peruvian population with internet access. To calculate the sample, we use a statistical tool, G*Power, using an effect size of 0.15, α error probability of 0.01, power (1-β error probability of 0.999 and 4 predictors for regression analysis as the statistical test used [30].
2.2. Instrument: Knowledge, Perception, and Response Questionnaire against COVID-19 Subjects responded to 6 sections of the questionnaire: Knowledge about coronavirus (COVID-19) infection, transmission, perception of disease severity, perceived susceptibility, prevention attitudes, and behavioral response to COVID-19 infection. The sequence in which tests were administered was identical for all subjects. This test was previously described in Zegarra-Valdivia et al. [31]. The survey questions were adapted and modified from previously published literature of similar questionaries under the Ebola, Zika, or A(H1N1) epidemic [7,8,10,13]. A group of trained psychologists systematically analyzed various surveys addressing similar themes such as knowledge, attitudes, and perceptions across diverse epidemic scenarios. During the pilot study, 20 respondents who had participated in the online survey were interviewed. The test respondents found the questions to be easily comprehensible, and the average time taken for completion ranged from 10 to 15 min. Participants were guaranteed the confidentiality of their responses and reminded that their involvement in the interviews was voluntary.
In the knowledge assessment section of the questionnaire, a score of 1 was given for each correctly identified symptom of COVID-19. The subsequent knowledge questions (14 items) were posed in which the answers were Yes, No, or Don't Know. In the transmission section (10 items), a similar scoring was given for each correctly identified transmission mode of COVID-19. A score of 1 was assigned to a correct answer and a value of 0 to an incorrect answer or do not know the response. In the section about the perception of disease severity, participants indicated the seriousness of COVID-19 in their community context and concerning other viral infections, such as influenza. A three-point Likert-type scale (agree, not sure/maybe, and disagree).
Questions on perception were divided into five parts. The first part explored perceived susceptibility towards COVID-19 (six items), in which participants indicated their level of exposure by either Yes, No, or Don't Know. A score of 1 was assigned to a correct answer and a value of 0 to an incorrect answer or do not know the response. The second part examined COVID-19-related fear (four items), with answers like the previous one.
The third and fourth parts, the susceptibility of getting contagious and contagious places, have 10 and 5 items, respectively. Participants select one of 3 possible answers (very likely, probable, and unlikely). The last part has four items (high, middle, low) and measures the probability of different things related to COVID-19. In the section about the prevention attitude (21 items), participants indicated which behavior is more likely to prevent COVID-19. A three-point Likert-type scale (agree, not sure/maybe, and disagree). Finally, behavioral response to COVID-19 infection explores the attitudes and perceptions about quarantine (3 and 6 items, respectively), in which the answers were either Yes, No, or Do Not Know. Each section has a total score. In the case of knowledge sections, a higher score indicates better Knowledge. A higher score in the perception and behavior score indicates increased vulnerability perception.
Knowledge items have a high score of 24, Attitude has a high score of 31, and perception has a high score of 38; The total score is 93. This study's maximum total score was 76 (56.88 ± 7.32). Regarding internal consistency, previous research shows a Cronbach's Alpha of 0.839 (0.82-0.857 IC 95%) on this instrument [31]. Nonetheless, we analyzed the internal consistency of the instrument. The total Alpha of the Cronbach was 0.811 with a range of sub-scales between (0.311-0.794). Sub-scale with reduced consistency was related to the perception of disease severity.
---
Ethical Statement
All participants were informed about the aims of this study and gave written informed consent. This study followed ethics guidelines and was approved by the local ethics committee (CEI number 003-2020). All data were collected in an anonymous database.
---
Data Analysis
The socio-demographic characteristics of the participants included in the study sample were compared with χ 2 tests. The χ 2 test compared the percentages of answers. The effect of age, gender, marital status, occupation, and education was assessed with a linear regression analysis using the total punctuation of the six previous sections. Statistical analysis was performed through the SPSS software version 24 (SPSS, Inc., Armonk, NY, USA). Results were significant with * p < 0.05 and ** p < 0.01.
---
Results
---
Background Characteristics (Table 1)
The study sample included 225 subjects. Most of the study sample was female (n = 134), and it is found a statistically significant difference between age groups by gender (p < 0.001 **). From the females, six adolescents (17 years old) were considered in the group <18 and included in the analysis regarding the age close to the age of majority and independence, a situation usually seen in Peru where adolescents do not live with their parents under different conditions. More than half of the respondents were between 18 to 29 years old (56.8%). 69.3% of the sample are graduates, single (70.2%), professional (Workers with a university degree), and independent workers (Technical jobs and trades), displaying a similar percent distribution between males and females.
---
Knowledge about Symptoms and Transmission Ways of COVID-19 Disease (Table 2)
The sample does not discriminate between the most frequent symptoms of the disease and includes other manifestations. Thus, more than half of the study sample correctly identified the most frequent symptoms like fever (94.7%), fatigue (62.2%), and dry cough (88.9%) along with others as just as sore throat (81.8%), joint and muscle pain (56.9%). A certain consensus is also observed among the subjects in recognizing as a manifestation of the disease the shortness of breath/shortness of breath (92%). However, this has not been confirmed as part of the diagnosis [30]. Diarrhea (64.9%), runny nose (60.9%), and nasal congestion (66.2%) were not recognized as part of the disease despite being more frequent than other symptoms, such as shortness of breath/shortness of breath. Most of the population (86.2%) knew the incubation period.
In the same way, the situations considered means of transmission/spread of COVID-19 include, in order of importance, Touching objects or surfaces that have been in contact with someone who has the virus (92%), going to areas/countries affected by COVID-19 (88.4%), shake hands with someone who has an active case of coronavirus (84.4%) like the most important. Also, subjects identified situations unrelated to contagion: participating in blood transfusions (59.1%) and relating to people in a hospital or emergency room (53.8%).
---
The Severity of COVID-19 and Prevention Measures (Table 3)
Regarding the severity of the disease, 91.6% consider COVID-19 as highly contagious, with symptoms like flu and influenza (84.4%). On the other hand, when evaluating the mortality ratio, they do not assess that it is worse than influenza or tuberculosis (76.4%) or causes permanent physical damage to patients (75.1%). However, when comparing the impact of COVID-19 with influenza or the common cold, more than half of the interviewees indicated that the coronavirus would cause a more significant effect (76%). The results also revealed insufficient confidence in the national or local authorities (62.7%), preparedness for the disease (76.9%), and the lack of adequate measures to deal with it (51.1%).
The results evidence an inappropriate understanding of the precautionary measures. At the same time, hand washing has been recognized as the most efficient form of prevention among respondents (98.2%), followed by personal hygiene (97.3%). Conversely, other vital measures were not considered, such as daily temperature control (57.8%) and the use of a mask (59.1%), even though the WHO recommends its use in healthy subjects in combination with frequent hand cleaning [32]. Furthermore, antibiotics are not recognized as the first line of action against the disease (75.1%), a sign of the population's Knowledge of the treatment.
---
Perceived Susceptibility to COVID-19 (Table 4)
On the other hand, around 59.1% consider that there is a stigma about COVID-19; 72.4% respond to preventive measures to avoid the disease, and 45.8% value that the problems derived from the pandemic will not pass quickly compared to the 35.6% who do not know about it.
One of the greatest fears among the evaluated population is being in contact with people who have returned from abroad (70.2%), followed by eating out (64%), visiting hospitals (63.1%), and having contact with people with flu symptoms (59.6%). Concern for the family is evident (71.6%), considering that one of the groups most susceptible to contagion is the people over 60 years of age (70.2%) in addition to health services personnel (74.7%). Children are considered in the last place of the possible infected subjects (56.4%).
Health institutions (45.8%) and domestic settings (68.4%) are considered places of infectiousness; in addition, the effectiveness of treatments (57.3%) and the effectiveness of available medication or remedies against the disease (75.6%) pose a high-risk vulnerability. 5 and6)
Finally, a multivariate analysis is used to analyze the weight of each proposed variable in the total score. This result shows that Knowledge has a slight but significant correlation with education (p < 0.031 *), occupation (p < 0.002 *), and age (p < 0.016 *) and explains less than 10% of the variance. In the case of perception, occupation (p < 0.034 *) has a slightly significant relationship but explains less than 5.2% of the conflict. The remaining variables do not have substantial results (Table 5). Besides, we analyze socio-demographics' impact (age, sex, marital status, education, and occupation) on the variables studied (Knowledge, Transmission, Severity, Perception, Prevention, and Attitude). We found that age (p < 0.001) was a critical positive covariable in Severity, Perception, and Prevention, with a decreased effect on Knowledge. Similarly, for Knowledge, education (p = 0.031) and Occupation (p < 0.01) show an effect. Finally, the scores reached by the sample on knowledge of COVID-19 was 22.40 ± 3.131, where 104 subjects (46%) obtained low knowledge, 72 (32%) medium knowledge, and 49 (21.8%) high knowledge. Regarding perception vulnerability, the medium score was 18.95 ± 4.86, where 56 subjects (24.9%) had low vulnerability perception, 96 (42.7%) had medium vulnerability perception, and 73 (32.4%) had high vulnerability perception. In the case of Attitudes against COVID-19, the medium score reached by the sample was 22.04 ± 3.02, where 83 (36.9%) had low attitudes against COVID-19, 67 (29.8%) had medium levels, and 75 (33.3%) has high levels of Attitude against COVID-19.
---
Discussion
Considering the spread of COVID-19 in Latin American countries and the higher incidence of people infected in Peru, this study aimed to measure the level of Knowledge, perceived vulnerability, and Attitude of the Peruvian population against COVID-19. However, different public health policies and the mandatory nature of these protective measures were implemented in the last months. The adherence of Peruvians to each of them was limited. Previous reports of psychological adherence to protective standards display that level of information and education are related to a positive attitude toward COVID-19 preventive practices [12].
COVID-19 has a higher rate of contagious properties than previous coronaviruses and affects multiple organs. The absence of awareness of hospital infection control and international air travel facilitated rapid global dissemination [33]. In addition, psycho-logical elements such as fear-induced behavior, misinformation, and economic-related concerns would exert significant pressure on the population, limiting compliance with these government measures [34].
At the time this paper was sent for publication, the Peruvian Ministry of Health had reported more than 9.7 thousand cases by COVID-19 infected patients since the first case was reported on March 6, and the total number of deaths is the second highest in Latin America, with 216 [34]. Nonetheless, the behavioral response of Peruvians was not sufficient. Behavioral responses, such as intense fear of infection or coronaphobia [35], are among the most significant indicators in the evaluated sampling. The findings identify that as long as there is Knowledge about dealing with the epidemic, the degree of susceptibility to infection is lower. As described in a study in Pakistan, the failure to follow precautionary measures against pathogens is explained by insufficient Knowledge [14].
Given the impact of such magnitude in Latin American contexts, further analysis is suggested to establish better response and epidemic control strategies from the standpoint of the population. Understanding people's risk perception is critical to ensure efficient health protection practices during virus outbreaks [10].
Regarding Knowledge, perhaps some symptoms are recognized as COVID-19-related (fever, sore throat, shortness of breath), but our participants do not discriminate correctly. Other significant symptoms, such as nasal congestion, runny nose, dry cough, or diarrhea, are usually more frequent in initial states. The incubation period is well recognized in 86% of the population. We found that knowledge was associated with education, occupation, and age, which indicates that people with a higher educational level tend to have greater access to information sources and educational resources, allowing them better to understand scientific and medical materials on the disease. In addition, it highlights that education and occupation may be related to a greater willingness to learn and adapt to new situations, such as the need to understand and act against COVID-19. Age was also associated with knowledge, showing that older people, due to their experience and recollection of past events, may be more aware of infectious disease risks and, therefore, more interested in learning about COVID-19 to protect themselves and their loved ones.
Routes of transmission of COVID-19 are well recognized (viral droplets in a sneeze, touching infected objects, shaking hands with people infected, etc.). Nevertheless, other medical circumstances were identified, relating increased perception vulnerability to a specific context and medical conditions (for example, 35% believe that COVID-19 spread is related to people in a hospital or emergency room). This affirmation can promote stigma among sanitary personnel. 5-25% of participants are unaware of or recognize transmission routes. Additionally, more than 20% do not realize that being in touch with people identified by doctors is a potential vector of transmission.
Perception of COVID-19 severity in the community showed that 76.9% believe that the authorities are unprepared to face the disease, and 62.7% think that the authorities' response is ineffective. This result may be related to less participation in dictated measures by the government, such as social isolation and gender segregation. Different preventive measures are well recognized by participants, such as personal hygiene, washing hands, or a clean environment. Notwithstanding, other practical efforts, such as using a mask (40.9%) or monitoring temperature (42.4%), are not considered.
Regarding perceived susceptibility, 72.4% believe that "Nothing I do can stop the risk of catching me." This vulnerable state may be related to poor participation in ineffective measures to avoid contagious, like social distancing or mask faces. On the other hand, 74.2% believe that "If I contracted the coronavirus (COVID-19), it would have serious consequences for me or my relatives". Despite different epidemiological studies pointing out that mortality is lower than 5% [36,37] even in Peru, current data indicates a mortality rate of over 10%, and recovery is one of the highest in Latin America [34]. Participants evidence an elevated fear of being in contact with others (59-70%), in correspondence of personal susceptibility of getting the infection (over 60%) and a high likelihood of having a significant outbreak of coronavirus (COVID-19) from person to person in my community (71.6%).
Regarding Multivariate analysis, it shows that educational level and occupation have an impact on Knowledge. Besides, age was the most critical covariable affecting most variables (Knowledge, Transmission, Severity, Perception, and Prevention). In this way, it is shown that age and probably the emotional maturity reached have been better mediators of these variables.
Finally, we concluded that insufficient understanding of COVID-19 seems to mediate unsafe behaviors, affecting effective prevention measures and the failure to reduce the rate of people infected. Moreover, the perception of vulnerability is high towards certain risky behaviors regardless of other possible transmission routes.
---
Limitations
This study has some constraints. First, causal inferences may not be established since the methodology is derived from a cross-sectional design.
Second, it is related to the sample. Because the study was only focused on the outbreak of COVID-19, we used a web-based survey method to avoid possible transmission, causing the sampling of our research to be voluntary and conducted by an online system. Given this circumstance, the possibility of selection bias must be considered.
Additionally, much of the sample have access to the internet connection in their computers or cellphones. Because of this, participants may have higher income or better educational access (more than 85% have graduate and postgraduate studies). Also, in the absence of low-income people, less education is needed to know their responses to the COVID-19 pandemic. The sample size is another limitation, and the current wave of misinformation in social media would affect poorer responses [16]. Third, due to the sudden disaster, we could not assess other socio-psychological conditions of the participants before the outbreak.
---
Recommendations
Due to fearful attitudes and the significant impact on population mental health towards the pandemic and new demands for surveillance and control of current COVID-19 outbreaks. Some previous studies identified appropriate suggestions to facilitate compliance with control measures by the population [14,15] and increase knowledge [27], especially enfaces in psychological coping [38,39]. Some of these are described below: First, educational intervention should be tailored to vulnerable communities, including teaching preventive measures and practical identification of risks in non-technical language [40]. The population must be educated to choose wisely regarding reliable news, such as facts and evidence-based data [41]. On the other hand, it is important to consider the knowledge and attitudes toward possible treatments since some studies conclude that the fear of becoming infected with COVID-19 helps the intention to get vaccinated. However, conversely, conspiracy theories about vaccines arise about their effectiveness. Hence, disseminating knowledge that is easy to understand to the population is essential for better reception and greater acceptance of the vaccine [42,43]. Likewise, the population must be educated about the post-COVID-19 syndrome and the possible consequences of the infection. For this, it is important to follow up on patients. However, there are few studies regarding it [44]. Second, consideration should be given to guiding the population on protecting their mental health by limiting the time they are exposed to information related to COVID-19 during the day [45,46], as well as the need to implement preventive actions in the general population to reduce the prevalence of depressive, anxious and fearful symptoms related to COVID-19 [47].
Third, It is crucial to encourage people to return to their usual work and rest schedule as much as possible to mitigate anguish and fear and ensure sleep quality before going to sleep [18,40].
Besides these recommendations, we believe that clear communication between the government and the Ministry of Health with the population is crucial, with relevant preventive-based-evidence programs. Mental health services and health education could be implemented in the communities, including by-phone therapy and emotional care.
---
Data Availability Statement: Not applicable.
---
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
---
Conflicts of Interest:
The authors declare that the research was conducted without any commercial or financial relationships that could be construed as a potential conflict of interest. |
Digital literacy has been included in the set of the eight key competences, which are necessary to enjoy life to the full in the twenty-first century. According to the previous studies, women tend to possess lower digital competence than men; the older the person, the lower the level of digital literacy. To date, Polish citizens in general have worse skills than the European average. This may lead to people being socially excluded and vulnerable to cybersecurity threats, especially in the times of the COVID-19 pandemic, which requires them to work, study and shop using the Internet. The study concerned Polish women who work at universities, as scientists and teachers. Their perceived level of their digital literacy has been studied in the broad campaign, along with their awareness of the cybersecurity matters. Then, the collected results were processed with an association rules mining algorithm, uncovering the factors related to the shifts in them. | Introduction
The development of the Internet and information technologies has changed people's lives in innumerable ways. Citizens are able to work more efficiently, purchase goods from the comfort of their homes, access any kind of information and knowledge, communicate instantly with people all around the globe, and so on. Thanks to the technology, they are able to live healthier, longer, safer and happier lives. In today's modern world, one needs to know how to utilize technologies to their advantage; it has become the requirement for the active participation in society. In other words, digital literacy has become a life skill, or a key competence (Telecentre Europe 2014).
The COVID-19 pandemic that we are yet to see the end of has transformed virtually every aspect of people's lives. In some fields the shifts have been subtle, some others have been changed drastically and people may still be in the process of adapting to the new reality. One of the major changed consisted in transferring business, schooling and commerce to the on-line domain, to an unprecedented scale. Hardly anybody was prepared for this change; the results of it were difficult to predict, too. After a number of months passed, the time came for some initial reflections upon the consequences.
Thus, the multidisciplinary research team has decided to conduct a series of studies and scientifically check whether the pandemic's side effect has indeed been the increase in the level of digital literacy (as suggested by the media, e.g., in Jeleński (2020)), and if the malicious activity of hackers during the crisis have made people be more aware of the dangers and threats that using the cyberspace may pose. Additionally, the factors influencing both aspects were to be uncovered. The rationale behind the selected topic, study group and methods will be presented in the further part of this paper.
The study included two major steps. Firstly, the team has conducted a broad campaign (the first of the kind) involving as many as 380 women working at Polish universities in order to construct a dataset. Then, an association rule mining experiment was conducted, revealing the actual relationships between the studied items, in order to answer the research questions stated in Sect. 2. The remainder of this paper is structured as follows: The remainder of Sect. 2 provides the rationale and context for the study, whilst Sect. 3-the theoretical background for the study. The materials, methods and the course of the study are presented in detail in Sect. 4. Then, the results of the study have been outlined, followed by the Discussion of the results and threats to validity in Sect. 5. Finally, the closing remarks are presented in Sect. refs6.
2 The research questions, and the rationale and the context for the study
---
Research questions
The study aimed at answering four research questions. They will be presented in this section, whilst the rationale and the motives will be explained thereafter.
---
Research question 1
Does the compulsory remote work make the perceived level of one's digital competence increase? The assumed answer: yes, it does. In order to build up a more comprehensive picture of the matter, the research team wished to know if the workplace/employer of the respondent provided any kind of training or support for the workers doing remote job, and if its presence or lack thereof influences the perceived level of one's digital literacy.
---
Research question 2
Does the compulsory remote work make the perceived level of one's cybersecurity awareness increase? The assumed answer: yes, it does. Additionally, the team wished to see whether employers help remote workers gain knowledge of the cyberspace threats and the ways of preventing them.
---
Research question 3
Is the level of perceived digital literacy related to the age of the respondent? The assumed answer: yes, it does. Based on the quoted study by EUROSTAT, the younger the person, the higher their level of digital literacy. The study aimed at checking whether younger people would assess their skills in a significantly better way than the older ones, or if the oldest respondents would assess their skills lower than the younger ones.
---
Research question 4
Does a higher level of cybersecurity awareness relate to feeling safer when working online, or vice versa? Does training at one's workplace contribute to the feeling of security?
The research questions have been summed up in Table 1.
---
Current context for digital literacy of women in the Republic of Poland
Despite digital skills being deemed a necessity in the twentyfirst century, according to the most recent study by Eurostat, merely 58% of Europeans aged 16-74 possess basic or above basic digital skills. There is the difference to be noticed between genders; 60% of males possess the skills, whilst the percentage of women is 4% lower. In Poland, while the overall percentage of the people having basic or above basic digital skills (44%, 22 nd place amongst EU28 countries) may not be drastically lower than the European average, there again exists the difference between genders, and only 43% women aged 16-74 are digitally literate.
In addition to this, it must be noted that another study has shown that the individuals possessing digital skills are mostly the very young ones, aged 16-19 (European average: 83%, Poland: 84%). Although the study has encompassed only the people aged up to 29 years old, it can be clearly visible that the older the individuals, the fewer of them possess the digital skills -for the group aged 20-24 it was 81% for Europe and 76% for Poland, whilst in the group aged 25-29, it was 78% and 69%, respectively (EUROSTAT, 2020). Taking the above-mentioned into consideration, it may thus be concluded that the group which may possess the lowest digital skills are the middle aged and elderly women.
In the light of the fact that digital literacy is considered one of the life skills, the lack thereof may make one's enjoying civic rights harder, or even lead to social exclusion (Soomro et al. 2020).
---
Current context of the Covid-19 Pandemic
When the WHO first declared the outbreak of COVID-19 a Public Health Emergency of International Concern in January 2020, and subsequently a pandemic in March 2020, probably no one imagined how life was going to change in the upcoming weeks and months.
In the struggle to prevent the disease from spreading, many drastic measures were taken, including lockdowns and compulsory social distancing. This meant that schools, businesses and countless other organizations had to start working remotely, utilizing cyberspace and digital tools. Millions of people were forced to switch to the online mode overnight, regardless of whether they had the skills to do so, or even a computer or access to the Internet (Pawlicka et al. 2021b). For many, this meant they suddenly lost the ability to perform their professional duties, or were denied access to education. After several months had passed, and governments, companies and individuals had tried to get a grip on the new, hard reality, it was suggested that the pandemic might have made people more digitally literate (Jeleński 2020).
However, the tense, difficult situation attracted many wrongdoers and criminals, who abused the fact that so many people were bound to use the Internet for work, learning, training, communication, purchasing necessities, etc.; that they had become utterly dependent on the Internet (Fidler 2020). Along with the massive increase in the number of videoconferences and the popularity of online shopping, banking, and so on, the amount of malicious software, phishing e-mails and ransomware attacks, along with a staggering amount of COVID-19-related fake news, has been rising disturbingly, with the occurrence of some types of attack increasing fivefold since the beginning of the pandemic (Rementeria 2020; WHO 2020). Again, the people with the lowest levels of digital literacy, who might already have been experiencing some forms of exclusion, have become the most vulnerable group, and their security and privacy may be compromised by various malicious cyberspace actors (Gerg 2020; Pawlicka et al. 2020a, b, 2021a).
---
Legal background
Currently, there is still an ongoing debate (also at the United Nations level) tending towards the position that the same human rights that apply offline should also be protected online, and that access to the Internet should be considered a human right, particularly in the context of the freedom of expression covered under Article 19 of the Universal Declaration of Human Rights (Article19, 2018). One of the first documents in which access to the Internet was considered a human right was the Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Human Rights Council (Rue 2011).
Later, in 2016, the Human Rights Council published the resolution on the promotion, protection and enjoyment of human rights on the Internet (UN General Assembly Human Rights Council, 2016). In this document, the importance of empowering all women and girls was directly emphasized: by enhancing their access to information and communications technology, promoting digital literacy and the participation of women and girls in education and training on information and communications technology, and encouraging women and girls to embark on careers in the sciences and information and communications technology. Article 5 of the Resolution suggested that states should make efforts to bridge the gaps of digital divides, including those of gender.
The problem of gender in this respect was already expressed in 2017 by the United Nations High Commissioner for Human Rights in the General Assembly report on the promotion, protection and enjoyment of human rights on the Internet: ways to bridge the gender digital divide from a human rights perspective (UN General Assembly Human Rights Council, 2016). In the report, it was stated that lower digital skills and lower digital literacy can directly contribute to the lower position of women in the labour market and in leadership positions.
In 2018, the Human Rights Council approved yet another document concerning the right to Internet access, namely the resolution on the promotion, protection and enjoyment of human rights on the Internet, at the 38th Session of the Human Rights Council in Geneva. In this document, emphasis is also put on the need to address gender-based digital divides. Point 5 of the resolution encourages all countries to put effort into bridging digital divides, including the gender digital divide. The resolution also calls for an enabling online environment that is safe for all and facilitates affordable and inclusive education.
---
Theoretical background
---
Digital competences as a key factor
In order to construct and design the research study and tool, the concept of digital skills had to be defined.
For the sake of this study, the notions of digital skills, digital competences and digital literacy have been used interchangeably. When building the tool, the definitions by the European Parliament and the Council have been adopted. They came up with a set of eight Key Competences for Lifelong Learning. The competences have been described as a mixture of knowledge, skills and attitudes that every person requires to derive personal fulfilment, develop as a person and be an active, upright citizen. They are also necessary for employment and for preventing social exclusion. The eight key competences comprise:
1. Communication in the mother tongue
2. Communication in foreign languages
3. Mathematical competence and basic competences in science and technology
4. Digital competence
5. Learning to learn
6. Social and civic competences
7. Sense of initiative and entrepreneurship
8. Cultural awareness and expression.
The European Parliament and the Council have recommended that the Member States provide these competences in all their lifelong learning strategies, as a way of preparing young people for adulthood, providing a basis for further learning and working life, and updating and developing the competences of all adults. It has also been recommended to make adequate provision for citizens who require particular assistance in order to realize their potential, whether owing to personal, cultural, social or economy-related circumstances (Eur-Lex 2006).
Digital competence is defined as involving the confident and critical use of Information Society Technology (IST) for work, leisure and communication. According to the recommendations, possessing digital competence means that individuals can understand the nature, role and opportunities of the digital environment in their daily lives, in both professional and personal contexts (Eur-Lex 2006). Digital competence is a broad and vague term; therefore, the skills it encompasses have been organized into a clear conceptual framework: the Digital Competence Framework for Citizens, called DigComp. Pursuant to the updated version of the model (known as DigComp 2.0), the skills that digital literacy comprises fall into five categories; altogether, there are 21 particular skills in this model. They are shown in Table 2.
---
State-of-the-art of digital literacy studies
The literature presents a number of studies related to digital literacy and competences in the context of universities or higher education. In their research, Shopova (2014) scrutinised the levels of digital literacy amongst students. Although digital competences were deemed crucial for enhancing the learning process, the majority of young people who took part in the experiment lacked many of the much-needed skills. The only work which touched upon the digital literacy of teachers in the pandemic was the paper by Santi Susanti Rachmaniar (2020). It checked if and how digital competence levels shifted among the teachers of an elementary school since the onset of the COVID-19 pandemic. Indeed, some of the studied people reported that they had gained new digital skills whilst preparing their lessons for students. Still, a number of teachers had difficulty in this domain and had to ask the students' parents for assistance.
However, to date, to the authors' best knowledge, no study of the level of digital literacy of university teachers/ educators amidst the pandemic has been conducted.
---
Data mining
Data mining, sometimes referred to as KDD (knowledge discovery in databases), consists in extracting unrevealed information from substantially vast datasets. In order to extract patterns and find order in historical data, intelligent methods are often applied (Bhargava and Selwal 2013). Association rule mining is a kind of data mining. It is utilised to discover association relationships between the items belonging to big datasets. The basic association rule is X → Y; it means that if X is true of an instance in a dataset, then Y is true of it, too. In this relation, X is called the antecedent and Y the consequent. Antecedents are understood as the items that appear first, and consequents as the ones that follow them. The level of significance of an association is measured using three indicators: support, confidence and lift. Support explains the level of popularity of a given combination of items (i.e., an itemset) within a dataset. It is the proportion of the number of occurrences of the itemset containing both X and Y to the total number of instances N in the dataset. It is expressed by Formula (1) (Agrawal et al. 1993):

(1) Support(X, Y) = freq(X, Y) / N

Confidence is the measure of how likely item Y is to appear together with item X; it is expressed as the proportion of the number of instances in which X and Y appear together to the number of times X appears (Agrawal et al. 1993):

(2) Confidence(X → Y) = freq(X, Y) / freq(X)

As this measure may misrepresent the significance of an association (especially if item Y is popular on its own), a third measure is applied, namely the lift. Lift says how likely item Y is to appear together with item X, taking into account the popularity of item Y as well (Gokul 2020):

(3) Lift(X → Y) = Support(X, Y) / (Support(X) × Support(Y))

In other words, lift is the measure which assesses the strength of the association (Gokul 2020). If the lift is higher than 1, the occurrence of X does lead to Y; the higher the lift, the stronger the association. A lift value close to 1 shows that one item does not affect the other. Consequently, if the lift is lower than 1, then the occurrence of X has a negative effect on the occurrence of item Y (IBM Knowledge Center, 2021). For the sake of this particular study, it was decided to apply the apriori algorithm, one of the most popular algorithms used for association rule mining. It was first introduced by Agrawal (1994) and described in detail by Bhargava and Selwal (2013). Apriori assumes that all subsets of a frequent itemset must be frequent; conversely, if an itemset is not frequent, then its supersets will be infrequent, too. It is worth noting that the thresholds/levels of frequency are decided upon by the person using the algorithm, based on experience, experiment, expert advice, needs, etc. The apriori algorithm has often been used for so-called market basket analysis, that is, analysing transactions in search of items frequently bought together. It allows one to find interesting, often surprising associations within large datasets. It is also said to be user-friendly and easy to use; however, it may require a lot of resources and computation time if applied to a large dataset with the minimum thresholds of the measures kept very low.
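To make the three measures concrete, the following sketch computes support, confidence and lift for a toy set of questionnaire-style transactions. It is a minimal illustration only: the item names (e.g., "had_training") are invented for this example and are not taken from the paper's dataset.

```python
# Illustrative only: item names are invented, not from the paper's data.
transactions = [
    {"feels_safe", "had_training", "protects_self"},
    {"feels_safe", "had_training"},
    {"had_training"},
    {"feels_safe", "protects_self"},
    {"no_training"},
]
N = len(transactions)

def freq(items):
    """Number of transactions containing every item in `items`."""
    return sum(1 for t in transactions if items <= t)

def support(X, Y):
    # Formula (1): how popular the combination of X and Y is overall
    return freq(X | Y) / N

def confidence(X, Y):
    # Formula (2): how often Y appears among the transactions containing X
    return freq(X | Y) / freq(X)

def lift(X, Y):
    # Formula (3): support of the pair, corrected for base popularities
    return support(X, Y) / ((freq(X) / N) * (freq(Y) / N))

X, Y = {"had_training"}, {"feels_safe"}
print(support(X, Y), confidence(X, Y), lift(X, Y))  # lift > 1 here: positive association
```

In this toy data, the lift of "had_training" → "feels_safe" is above 1, i.e., the two items co-occur more often than their individual popularities alone would suggest.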
This particular algorithm was chosen for the experiment in the hope that it would yield interesting results, help confirm or reject the research assumptions, and answer the research questions.
---
Materials and methods
---
Part 1: The dataset and research group
It was decided that the first study of the series would be conducted on women who are university teachers/educators and/or scientists by profession.
This selection was made based on the fact that women score more poorly in digital literacy tests (EUROSTAT, 2019; Jiménez-Cortés et al. 2017). Moreover, teachers (including those in higher education) have been reported not to utilise the available digital tools which would enhance learning outcomes (De Pablos Pons 2010). The lack of proper digital skills was pointed out as one of the reasons for this situation (García-Pérez et al. 2016). Finally, universities in Poland were required by law to go online from 12th March 2020 (Ustaw 2020; Ministry of Science and Education, 2020). Thus, women working at universities, known to have been forced to work remotely, were expected to show an increase in their levels of digital literacy and cybersecurity awareness.
---
The research methodology and the design of the data gathering process
In order to conduct the study, a questionnaire was constructed. It consisted of three parts. The first asked about the scientific title, the scientific field the studied person worked in, their age, place of residence (a city or the countryside), and whether they worked as university teachers/educators, scientists, or both. The subsequent section concerned whether they worked remotely; if they did not, the survey finished. Then, there were questions on the perceived (self-reported) level of the digital skills of the studied individual: a question about the perceived general level, questions on the five components of digital literacy according to the DigComp 2.0 Framework, and a question on whether the employer helped/supported them in gaining the necessary skills. The section finished with the question of whether the person felt their perceived level of digital literacy had increased during the COVID-19 pandemic.
The last section of the questionnaire touched upon aspects of cybersecurity. The first question aimed at checking whether the studied person felt safe when using cyberspace and its tools. Then, it was checked whether the person's employer had made them aware of the possible threats to their assets and privacy that may come with remote work. Finally, in the last question, the individual was asked to assess whether their perceived level of cybersecurity awareness had increased during the COVID-19 pandemic.
The primary version of the questionnaire was an online tool. It was distributed amongst people belonging to the target group by various means: using instant messages, Facebook groups, or by asking individuals personally and then sending them the link to the questionnaire via e-mail. Each person was informed that the study was anonymous and GDPR-compliant, and that no personal data was collected or saved (Pawlicka et al. 2020b). This broad information-gathering campaign took place in October 2020.
---
The dataset at a glance
Altogether, 380 women responded to the questionnaire. To the authors' knowledge, this has been the first scientific study of the digital skills and cybersecurity awareness of women academics and scientists in the times of the pandemic, and one of the largest studies of the digital competences and cybersecurity awareness of women academics in general.
---
Subjects from whom data was collected
---
Scientific title
The majority of respondents hold a doctoral degree (63%). 18% of them have the PhD/ScD hab. ("habilitation", a title used in some countries, with seniority between a doctor and a Full Professor); 17% hold an MA; 2% are Full Professors, and 1% hold other titles.
Scientific field The majority of the studied women deal with the social sciences (41.6%). Almost a quarter of the respondents (23.5%) work in the exact or natural sciences. The third largest group belongs to the humanities (19.8%). The remaining fields were: engineering or technical studies (8.7%), agriculture (4.1%), art (2%) and theological studies (0.3%).
Age of the respondents The studied women were aged from 25 to 70 years old. The average age was 38.8 years old; the median age was 38.
Place of residence The vast majority of respondents live in the city (89%).
---
Digital competence
General level Most respondents assess their perceived level of digital competence positively, choosing either "very high" (27%) or "high" (56%). Only 1% of the respondents believe their overall level of digital competence is low. It is worth noting that no respondent found their level of digital literacy to be very low. The rest of the studied women (16%) believe they have average digital competence.
Then, the level of the particular components of digital competence (according to DigComp 2.0) was measured. Each of the respondents was given a description of the component, along with examples of practising it. They were asked to assess their level of competence within a particular component using a grade from 1 to 5, where 5 meant "very high", 4 "high", 3 "it is hard to tell", 2 "low", and 1 "very low".
---
The assessment of the digital literacy components
Figure 1 shows the imbalance in respondents' digital skills; there are distinct differences between the perceived levels of particular components of digital literacy. Communication and collaboration skills scored the highest, whilst the average score for safety was almost a point lower.
The employers' support The next question concerned whether the respondents' employers had supported them in transitioning to working remotely using the Internet, by providing various forms of training, educational materials, tutorials, or access to resources/helpdesks. Almost three quarters of the employers (73%) had done so.
The pandemic and the increase in the perceived level of one's digital skills When asked whether their perceived, overall level of digital competence had increased during the COVID-19 pandemic, almost two thirds of the respondents (61%) answered "definitely yes" or "rather yes". Only about a quarter (26%) of the studied group believe their digital skill levels have not increased ("rather not": 22%; "definitely not": 4%). The rest were not able to assess whether their skills had increased or not.
---
The level of cybersecurity awareness
The following part of the study concerned the aspects of cybersecurity.
Feeling safe when working online Most respondents (57%; "definitely yes": 6%, "rather yes": 51%) feel that both they and their property are safe when they use the Internet for working online. About a quarter of them cannot decide whether they feel safe or not. The remaining 17% do not feel safe when working online ("rather not": 14%; "definitely not": 3%).
The employers' role in raising the cybersecurity awareness level When asked whether their employer had made them aware of any cybersecurity measures or possible cyber threats, most respondents (64%) said no.
The last question concerned whether the respondents felt their level of cybersecurity awareness had increased during the COVID-19 pandemic. The largest group believes it has not risen (43%; "rather not": 35%, "definitely not": 8%). About a quarter of the respondents (26%) find their awareness level to have increased ("rather yes": 22%, "definitely yes": 4%). Almost one third of the studied women (32%) cannot decide whether their cybersecurity awareness level has shifted or not.
---
Part II: The data mining process
As evident from the data gathered, most respondents (over 60%) believe their digital skill level rose during the pandemic. One may believe that this shift was caused by people having to deal with digital issues by themselves, as suggested in Jeleński (2020). However, in almost 75% of cases, people admitted to having been supported by their employers in transitioning to working online, too. At the same time, only about a quarter of them believe their cybersecurity awareness level increased when working remotely via the Internet, and most respondents claimed that their employers had not made them sufficiently aware of cybersecurity matters. In order to find out whether there actually is any relation between these factors, the data was processed with an association rule mining algorithm.
Experimental setup and preparing the dataset In order for the dataset to be ready for applying the algorithm, all the data items were converted to categorical ones. The values for minimum support, confidence and lift were selected experimentally, in order to reflect the associations as accurately as possible and to economise on computational time. The final values were: minimum support = 0.025, minimum confidence = 0.3, minimum lift = 6.
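The frequent-itemset step described above can be sketched in pure Python. This is a hedged illustration only: the paper does not publish its code, and in the real study the items were categorical survey answers (e.g., age brackets or Likert grades), not the toy strings used here. The core of apriori is generating candidate k-itemsets only from frequent (k-1)-itemsets:

```python
# Minimal apriori sketch (illustrative; not the authors' implementation).
from itertools import combinations

def apriori_frequent(transactions, min_support=0.025):
    """Return {itemset: support} for all itemsets meeting min_support."""
    N = len(transactions)
    # L1: supports of single items
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c / N for s, c in counts.items() if c / N >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # Candidate k-itemsets: unions of frequent (k-1)-itemsets.
        # Apriori pruning: every (k-1)-subset of a candidate must be frequent.
        candidates = set()
        keys = list(frequent)
        for a, b in combinations(keys, 2):
            u = a | b
            if len(u) == k and all(frozenset(s) in frequent
                                   for s in combinations(u, k - 1)):
                candidates.add(u)
        frequent = {}
        for c in candidates:
            sup = sum(1 for t in transactions if c <= t) / N
            if sup >= min_support:
                frequent[c] = sup
        result.update(frequent)
        k += 1
    return result
```

The default `min_support=0.025` mirrors the threshold chosen in the study; in practice a lower threshold quickly inflates the number of candidates, which is the resource cost mentioned earlier.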
The results After the data was processed by the apriori algorithm, a list of 11 association rules was created. It was then sorted according to the lift value; the higher the value, the stronger the association, according to the algorithm. The results are presented in Table 3.
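A rule list of this kind can be derived from the frequent itemsets and ranked by lift. The sketch below is illustrative (the supports in `itemsets` are invented, not the paper's values); it applies the study's thresholds (confidence ≥ 0.3, lift ≥ 6) and sorts so that the strongest association comes first:

```python
from itertools import combinations

def rules_by_lift(itemsets, min_confidence=0.3, min_lift=6):
    """Derive rules from {itemset: support}, filter by the thresholds,
    and sort by lift, strongest association first."""
    rules = []
    for items, sup in itemsets.items():
        if len(items) < 2:
            continue  # a rule needs both an antecedent and a consequent
        for r in range(1, len(items)):
            for antecedent in map(frozenset, combinations(items, r)):
                consequent = items - antecedent
                conf = sup / itemsets[antecedent]   # Formula (2)
                lift = conf / itemsets[consequent]  # equivalent to Formula (3)
                if conf >= min_confidence and lift >= min_lift:
                    rules.append((antecedent, consequent, sup, conf, lift))
    return sorted(rules, key=lambda rule: rule[4], reverse=True)

# Toy input: supports are invented for illustration.
itemsets = {
    frozenset({"A"}): 0.05,
    frozenset({"B"}): 0.04,
    frozenset({"C"}): 0.5,
    frozenset({"A", "B"}): 0.03,
    frozenset({"A", "C"}): 0.04,
}
for a, c, sup, conf, lft in rules_by_lift(itemsets):
    print(set(a), "->", set(c), round(lft, 2))
```

Note how the popular item "C" is filtered out despite its high confidence with "A": its lift stays near 1, which is exactly the distortion the lift measure corrects for.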
---
Discussion of the results
The COVID-19 pandemic has influenced almost every aspect of people's lives. Many routines and activities have moved to the online domain and may remain this way for a longer time, maybe even forever. This shift has challenged people's digital literacy and made them think about keeping their online assets secure. The conducted study aimed at checking whether both aspects improved, as a surprising side effect of the life-threatening global pandemic, and which factors are associated with the self-reported digital skill level and the cybersecurity awareness level/feeling safe when working online.
(Figure 1 caption: The average score of a given component.)
---
The answer to the research question 1
As anticipated, the perceived level of people's general digital literacy rose during the COVID-19 pandemic; over 60% of the respondents believed so. The association rule mining study, however, did not find a significant association between having to work remotely and the increase in that level; rather, the increase was related to the employers' support and workplace training.
---
The answer to the research question 2
Despite being forced to utilise online services in almost every aspect of their lives and becoming almost totally dependent on the Internet, people do not seem to have become more aware of the possible cyberthreats and of the cybersecurity measures aimed at preventing them. This should be worrisome, as the study of the particular components of digital literacy showed that safety is the aspect people know the least about. There is a concern to be had, as at the same time they feel relatively safe, which might lead to them not being alert enough and falling victim to various types of cyber exploits. Only about a quarter of the respondents believe their cybersecurity awareness level increased when working remotely via the Internet; this may be related to the fact that, as the study shows, most employers seem not to put enough emphasis on cybersecurity-related matters. A similar conclusion was reached as a result of the association mining study: the respondents who were made aware of cybersecurity matters by their employers reported feeling safe and secure when working online. This is a clear hint for employers that they need to include cybersecurity awareness-enhancing frameworks in their training/workplace education agenda. In the long run, this will help people protect their data and property when working online.
Examples of the mined rules (from Table 3, sorted by lift) include:
- If a person feels "definitely safe", then they "definitely" had considered their online safety, "definitely" try to protect themselves online, and had never fallen victim to cyberattacks (support 0.028871391, confidence 0.47826087, lift 8.677018634).
- If a person feels "definitely safe", then they "definitely" try to protect themselves online and their employer had made them aware of the cybersafety issues (support 0.028871391, confidence 0.47826087, lift 7.592391304).
- If a person feels "definitely safe", then they "definitely" try to protect themselves online, their employer had provided them with support and/or training of their digital skills, and their employer had made them aware of the cybersafety issues.
- If a person feels "definitely safe", then they "definitely" had considered their online safety, they live in a city, they "definitely" try to protect themselves online, and their employer had provided them with support and/or training of their digital skills (support 0.026246719, confidence 0.434782609, lift 6.1352657).
- If a person feels "definitely safe", then they "definitely" had considered their online safety, "definitely" try to protect themselves online, and their employer had provided them with support and/or training of their digital skills (support 0.028871391, confidence 0.47826087, lift 6.073913043).
---
The answer to the Research question 3
As expected, the youngest respondents were the ones who reported having the highest level of digital skills; this is in accordance with the previous findings. The algorithm also found a strong association between age and holding an MA; this came as no surprise, as the respondents aged 26-29 are usually in the process of gaining their PhDs and further titles.
---
The answer to the research question 4
The level of one's perceived security when working online is strongly associated with whether one's employer provides digital skills and cybersecurity-related training and support, and with whether one considers one's safety and makes active efforts to protect oneself.
---
Threats to validity and the limitations of the study
As for construct validity, it is believed that the language of the questionnaire and its questions was well understood. This aspect was evaluated by asking the first 5-10 subjects (who were known to the authors) to answer the survey. Regarding external validity, the study was conducted amongst women who are educated (as academics) and mostly dwell in cities; the authors are aware that the study did not encompass representatives of lower classes/education levels. In order to obtain the fullest picture of the matter, the study would need to include people belonging to the lowest class, those living in the countryside, as well as female primary school teachers. The authors have already planned to conduct such a study in the near future, employing other data mining algorithms, and will share the updated results afterwards.
---
Conclusions
---
Practical and theoretical implications
There is no doubt that the pandemic-related crisis, which has also resulted in an economic crisis and increased unemployment, has revealed digital and technical skills deficits. In the case of this work, they were uncovered among women working in such an important area as science. Women employed in this sector need to acquire digital competences if they want to retain their posts, but also if they wish to teach today's young people. Generation Z, iGen, iGeneration, Generation XD: these are the terms used to describe the digital generation now entering adulthood, who do not know a world without the Internet and prefer to spend their time on the phone rather than amongst other people. This generation is composed of people born after 1995 (the year the Internet was commercialised) or after 2000. They are the first generation to have had permanent access to the Internet.
In complementing and upgrading their digital competences, some of the female scientists surveyed are able to cope on their own, but others need immediate help in the form of action from their employers (universities, colleges). This issue calls for further analysis of the educational role of organisations.
Complementing qualifications and skills in this area is also necessary due to the phenomenon of technological exclusion, which has increased dramatically during the pandemic period. The women surveyed need to intensify their professional development in this area in order to equalise opportunities. There is no other way to keep a job.
The article also highlights aspects related to digital security. The insufficient level of competence in dealing with digital threats points to the need for awareness and education, especially since the scale of threats from the digital world is constantly increasing. Not everyone is able to increase their competences quickly enough through individual self-education, which is necessary to deal effectively with multi-faceted e-risks.
Education in digital competences, including education in digital safety, is an action which should be taken as soon as possible in the Polish higher education environment in order to mitigate the negative effects of the crisis, but also to use this moment to equalise the chances of women on the labour market in the digital economy of the future.
SPARTA, a Horizon 2020 cybersecurity pilot project funded by the European Commission, is an example of such a desired initiative, as a significant effort is placed in it to help tackle both of the challenges presented in this study. A range of actions and systems are being put in place to help mitigate the "gender gap". This starts with embedding actions inside the project itself, and then communicating the principles and values outside the project to help specifically address the female public, in both dissemination and communication activities. Female participation in all training-related activities during the project is prioritized, so as to focus on and incentivise female participation, involvement and uptake. Female mentorship programs within SPARTA partner cybersecurity research teams are being created. The project also strives to understand and correct social barriers related to female participation at all levels of the cybersecurity workforce. The results of this investigation illustrate the immense importance of innovative initiatives like the "Women in Cyber Campaign" implemented in SPARTA (Lindner et al. 2020).
---
Final remarks
This particular study concerned women scientists and/or university teachers, thousands of whom were forced to start working online when the pandemic broke out. Most of the respondents report that their digital literacy has increased. As women used to score more poorly in digital skills tests before the pandemic, it may turn out that the global crisis contributed to their gaining more competence. It might even turn out to be one of the very few benefits of this terrifying situation. However, the shift did not happen by itself. The association rule mining study has shed some light on the significance of workplace training and support; there is a strong relation between employees feeling that their competence is rising and whether their employers provided them with the opportunity to gain the skills.
Moreover, the fact that people's cybersecurity/safety skills scored the lowest, along with the lack of emphasis on cybersecurity matters in the workplace, is alarming. Cybersecurity skills need to be addressed more, as the lack thereof may lead to disastrous results, like personal harm or financial losses, even after the pandemic finishes.
Finally, cybersecurity awareness is not something which appeared alongside the pandemic-related spike in the amount of cyber-mischief. Apart from the support at one's workplace, a person does need to make active efforts in minding and protecting their own cybersecurity in order to feel as safe and confident as possible when working in a remote manner.
---
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Medical anthropology, an interdisciplinary subfield of mainstream anthropology, has become one of the most popular and promising bodies of knowledge, gaining a strong presence in the present world. Biomedicine, a central concept of medical anthropology, encompasses scientific medicine, allopathic medicine, modern medicine and regular medicine, and basically focuses on human biology, pathophysiology, western knowledge and modern technology. It has become globally dominant during this century and is well practised in developed, developing and less developed countries. This study has explored the local nature, practice and perception of western biomedicine in a rural area of Bangladesh. This research paper systematically examines the challenges rural women face in getting access to biomedicine or modern medical services in local hospitals and clinics. It also explores how a treatment system based on modern technology and scientific knowledge has affected rural women's biological and mental health. Applying the Critical Medical Anthropology (CMA) perspective, this paper identifies the socio-cultural and political-economic aspects of biomedicine and its impacts on rural women's health. The kinds of health problems rural women are facing, and the ultimate results of practising biomedicine, are analysed here. Using CMA, this study has analysed the power relations found in biomedicine and in biomedical health seeking at local hospitals and clinics, which have become primary sites of social control. This article is a written interpretation of anthropological fieldwork conducted in 2017 in Kaiba village, Sharsha Upazilla, Jashore district, Bangladesh.
Introduction
In the twenty-first century, whenever we think about health, disease and the health care system, images of physicians, medical care and hospitals appear before us. The scientific approach to studying health and medicine is called biomedicine. The most explicit application of biomedicine is found in hospitals and clinics, where doctors and surgeons directly apply their biomedical knowledge to patients' bodies to identify disease, connecting it with scientific and natural causes and their apparent effects. Biomedicine is referred to as scientific medicine grounded in the biological principles of health practice (Gaines & Floyd, 2004).
Hospitals and clinics are the principal sites where biomedicine is practiced. Significant links can be traced between human health and healing places such as hospitals, clinics and dispensaries (Quirke & Gaudilliere, 2008).
Hospital-based health services originated in ancient Rome and Europe, where priests played an important role in serving sick and dying patients. One of their main duties was to deliver health care on the basis of religious practice, attributing illness to spiritual causes, and they sought to heal patients at their guesthouses. After the Renaissance, the nature of the treatment system changed: in observing disease, only physical and worldly causes were considered, and spiritual causes and beliefs were set aside.
After the Enlightenment, hospital-based medical services were promoted as prestigious, with professional doctors and biomedical surgeons playing a vital role. They achieved an authoritative position in health seeking in both Western and non-Western countries. Over time, biomedicine began to play a dominant role and to change its nature as it took multiple forms (Shah, 2020, p.5).
---
Biomedicine, Health and Women Health Seeking
Biomedicine, disease, sickness, health and the health-seeking process are core concepts of medical anthropology; they occupy a vital space not only in the academic world but are also bound up with our daily lives. Biomedicine is the name given to Western biological and scientific medicine, which emphasizes biology and fact. Scientific medicine, allopathy, cosmopolitan medicine and technology-based medicine are all named as biomedicine (Gaines & Floyd, 2003, p.1). The term biomedicine first appeared in Britain in 1923 in Dorland's Medical Dictionary, where it denoted clinical medicine based on biochemical and physiological principles. In the late 19th century, various scientific technologies shaped, reshaped and influenced the Western medical system now widely known as biomedicine (Quirke & Gaudilliere, 2008).
Biomedicine refers to Western and modern medical systems; according to the anthropologist Hahn, these modern treatment methods are called 'biomedicine' (Baer et al., 2003, p.11). Biomedicine revolves around anatomy, pathology, diagnosis, the administration of drugs and therapy, and surgery. Biomedical treatment services are very close to biomedicine: the term 'biomedical' relates to medical, physiological and scientific knowledge, and to the process of providing modern medical services through the use of scientific and modern equipment. Biomedicine also includes biomedical doctors, pathologists and neurologists. Physicians consider disease only in the light of pathology, studying it on the basis of the Cartesian duality of body and mind. Biomedicine originated in America in the 19th century and was recognized as the authoritative medical system in Western society (Baer, 1989, p.1105).
Health means being well both physically and mentally; it refers to healthy living and the absence of disease and illness in the human body and mind. The WHO holds that health does not mean merely the absence of disease but refers to physical, social and psychological well-being (Brown, 1998, p.11). Women's health refers to their physical and mental health and the absence of sickness in their bodies. Reproductive health constitutes a major part of women's health, and its protection and wellness are likewise part of women's health (Begum, 2015).
In patrilineal rural households, health is not determined by women alone; health-seeking behavior is guided by men, who hold authority over women owing to their higher social status in a patrilineal society (Reiter, 1975, p.53). This notion is also consistent with Edwin Ardener's 'muted group theory', which holds that women in a patrilineal society belong to a muted group whose expression of world views, feelings and independence is blocked by the dominant structure and views of society (Ardener, 1975b, p.213).
In Bangladesh, women's health is determined by socio-economic and political factors (Paul et al., 2014). In addition, gender inequality in the health sector and in other services consigns women to poor health. Globally, 43% of women and 65% of pregnant women suffer from anemia (Allen, 2000). The women observed in this study suffer from low blood pressure, anemia, and deficiencies of vitamins and minerals. They call on a biomedical doctor when they face accidents (broken legs, hands or waists), pain-related illnesses (migraine, headache, back pain, joint pain, neck pain), stroke, and various complexities related to puberty and reproductive health. Among the respondents, 75% of sick women, although interested in biomedicine, cannot receive it at the primary stage of sickness without the permission, assistance and proper guidance of the male members of their families. It was observed that women (housewives, students, the young and unmarried) take permission from their husbands or an elder male before visiting biomedical doctors' chambers, clinics and hospitals in the nearest bazaar, and that they travel to district-level health facilities accompanied by a male companion under his direct or indirect supervision. Many other socio-cultural and political factors also influence the decision as to whether a sick woman will take biomedicine or not.
---
Medicalization and the Possession of Medical Knowledge over Women's Physical and Mental Health
Conrad and Bergey state that "medicalization refers to the process by which various aspects of human beings are considered as medical problems." The adoption of modern medicine and the process of medical care in which power relations are involved is called medicalization (Ember & Ember, 2004).
The term 'medicalization' first appeared in sociology, where it was used to examine the human condition. From sociological and anthropological perspectives, "medicalization is a process that refers to some non-biomedical problems as medical problems, which require medical treatment" (Conrad & Bergey, 2015, pp.105-109). Medicalization refers to the role and controlling power of medicine over the human body. Feminist anthropologists argue that "medicalization is a patriarchal process in which women's bodies are intervened upon by modern technology and trained male doctors" (Ember & Ember, 2004, pp.116-118).
In biomedical services, disease is understood only in the light of biology, anatomy and pathology; the social and structural elements of disease are not considered. As a result of receiving biomedical services, the patient's body comes under the control and monitoring of particular medical knowledge, technology and modern equipment. The female body is not free from this subordination, where subordination denotes the low status imposed by those of superior rank and position who cast something as weak or inferior.
Biomedical knowledge considers itself superior and prestigious, treating the knowledge of other health care systems, such as traditional health care knowledge and natural knowledge of disease, as inferior. Such is the controlling power and hegemony of biomedicine.
Under biomedical knowledge, a woman's reproductive body is conceived of and treated as a machine (Martin, 1987, p.146). Biomedicine undermines women's natural knowledge and experience of childbirth and has reshaped birth through the caesarean section. The knowledge of biomedicine has taught women that if they have fever, headache or rheumatism, they should not simply rest or take care from home or traditional medicine, because that will not cure them; instead, they should go to the hospital, see a biomedical doctor and take medicine, along with various tests such as blood tests and body-temperature checks. Thus the female body is subjected to biomedicine, and the control of biomedicine is observed upon it. This has brought a degree of interference into women's normal movement and living. Moreover, technological faults can have negative impacts on patients' bodies, sometimes increasing the outbreak of disease rather than minimizing it.
The authoritative knowledge of biomedicine has taught not only rural women but all of us to believe that if we fall ill, we must go to the doctor and take biomedicine with biomedical treatment. In most cases, biomedicine has informed both rural and urban women that they are now unable to give birth in the natural way known as normal delivery, claiming that if they accept the labor pain of normal delivery and try to give birth at home, without going to a hospital or clinic or taking biomedical treatment, they will put their health at stake. Biomedicine has thus established authority over the natural world of women and controls their everyday thinking (Sultana & Nur, 2004, p.21), and it clearly exerts control over the observed women's mental and physical health. They are afraid to give birth at home, and to avoid risking their lives they go to doctors in hospitals and clinics, yet they are at stake there too. Moreover, the high cost of biomedicine makes women anxious, affecting not only their bodies but also their minds. In sum, the controlling nature of biomedicine, the cuts and wounds on the studied women's bodies, and the technical errors caused by technological faults in local hospitals and clinics are pushing rural women's health into dire straits.
---
Aim of the Research
This study observes the local nature, practice and perception of Western biomedicine in a rural area of Bangladesh. It analyses how rural women perceive and practice biomedicine, the challenges, risks and crises they face in obtaining it, and the ultimate results of taking biomedical treatment at local hospitals and clinics. It also explores how the authority and power of biomedicine impose controls on rural women's health. The study analyses the political economy of health in order to understand the impact of biomedicine on rural women's health in a broader context, as well as the unequal doctor-patient relationship that exists in rural hospitals and clinics. The internal political environment of local hospitals and clinics is not conducive to women's health. This article describes rural women's experiences of sickness and their health-seeking behavior, and attempts to depict all of these issues from a Critical Medical Anthropology (CMA) perspective.
---
Field Research
Fieldwork is the lifeblood of anthropological research. This study is primarily based on fieldwork carried out from July to September 2017 for academic purposes. The research field was located in Sharsha Upazila of Jashore district, about 270 kilometers from the capital city of Dhaka, Bangladesh. I chose this village as my field because of my prior acquaintance with the villagers, which made it easy for me to build good relationships with them. After entering the field, I adopted an informal strategy and engaged in conversation with the villagers, which helped me collect substantial data on women's health and the use of biomedicine. I observed that the villagers take biomedical treatment alongside other medical systems such as homeopathy, ayurveda, natural and herbal remedies, and religion-based treatment. I would chat with women of different ages, adolescent, young, middle-aged and older, about various issues; it was important for me to gain a clear picture of their health-seeking behavior on the basis of their different experiences. I talked not only with women but also with those men whom it was important and possible for me to approach. As this was a village area, I remained conscious of local norms and values. An informal strategy is suitable for collecting profound data, and it eases the researcher's acceptance; anthropologists call this the 'big net approach' (Fetterman, 2010, p.35). From about 370 households I selected 40, choosing 40 participant informants through purposive sampling, of whom 10 were male and 30 female. Since the study focuses on women's health, I selected more female informants, while the male informants were chosen to understand how men view women's sickness within the household. I did not use the participant observation method in full, but I carefully observed the participants' activities.
Participant observation is the method of taking part in people's way of life and then closely observing their activities (Bernard, 1995, p.138). The participants' emotions, feelings and experiences of health-seeking behavior were understood through an anthropological lens. I talked to various patients admitted to the local clinics and hospital, who shared their experiences of and feelings about biomedicine and biomedical treatment.
A structured strategy, with specific questions, was used for people who were busy with their professions, such as doctors, nurses and hospital attendants. For semi-structured interviews I prepared a written list of questions and interviewed informants according to that list, along with a little anecdote. Village men usually left for work early in the morning, which is why I conducted semi-structured interviews with them. Unstructured interviews took place while the women were cooking and gossiping with one another in their household yards, where I joined them; at one stage of these conversations they shared with me their various ideas about sickness and health care. Such interviews are called unstructured or open-ended interviews. The key informant interview method was also followed: a young man of 27 and a young woman of 19 were the main key informants of this study, and they introduced me to the other participants. The primary data of this study come from fieldwork experience, and the secondary data from various books, articles, journals and literary works. Sensitive and important data about biomedicine and women's health are analysed in this study through case studies, narrative analysis and the life histories of participants. Life history is essential for data collection; it is described as the account of a personal life that emerges through observation and interviews (Denzin & Lincoln, 2000, p.39).
---
The Practice of Biomedicine in Bangladesh: Study Area
The expansion of biomedicine in the Indian subcontinent can be traced back to the colonial period, when it served to protect the health of the colonial rulers (Sultana & Nur, 2007, p.16). As a developing country, Bangladesh is not free from the influence of the biomedical treatment system. In the post-independence period, many hospitals and clinics were established for the development of the health sector, and the institutionalization of biomedicine began: doctors and nurses practiced biomedicine and treated patients at clinics and hospitals, and professional physicians and surgeons gained their dignity. Today, biomedicine has become so well organized and institutionalized that it dominates herbal medicine, ethnomedicine, homeopathy and alternative medicine. The Western hegemonic knowledge and biopower of biomedicine, and its exclusive adoption by the so-called privileged class of society, have led to a widespread belief that biomedicine has no equal in the field of health care. However, the ideas that biomedicine always ensures health protection, that it has universal validity, and that it is competent to cure all types of disease and sickness are not always valid or suitable for all. Rather, biomedicine's principle of mind-body dualism, which separates body from mind, and its system of studying disease on the basis of physiology alone, also produce inconsistencies in patients' bodies from time to time.
The use and practice of biomedicine are found not only in urban but also in rural areas of Bangladesh. The environment of rural households is not free of disease and illness, nor from the use and impact of biomedicine. Biomedicine has taught us all to believe that if we have a disease, we must go to the doctor, seek medical care and take medicine. When a member of a household feels sick, the first thought is what kind of health service should be provided so that he or she may recover quickly. Biomedicine is regarded as an immediate remedy for disease, and although rural people know its cost is high, they accept it to find relief from the outbreak of disease. The women of rural Bengal likewise take biomedicine when they face various health-related complexities.
Today biomedicine has become a more dominant and prestigious health remedy than other health care systems such as traditional medicine, homeopathy, ayurveda and ethnomedicine (the medicine of a particular culture, group, community or society). Biomedicine and biomedical practices are bound up with power hierarchies that occupy both public and private health care sectors (clinics, hospitals, health service and care centers) in Bangladesh. Zaman (2005) describes three tiers of the public health care system in Bangladesh, each with its own hierarchy based on the specialization of knowledge and power: medical colleges and hospitals belong to the tertiary level, district or urban hospitals to the secondary level, and upazila health complexes to the third level. Although health policy is organized and developed, it is less effective at ensuring good and equal health care for all people regardless of age, class, sector, area or gender. Faruk Shah (2020) has shown that qualified doctors are available in urban areas while the number of skilled doctors is proportionately lower in rural areas, where people face social, economic and political challenges in obtaining health care from a skilled doctor.
The inhabitants of Kaiba village practice both biomedicine and alternative medicine (homeopathy, ayurveda, kabiraji, and natural and herbal treatment systems). Biomedicine has gained popularity and achieved an authoritative position over alternative medicine, becoming the usual care for common ailments such as coughs, colds, fevers, inflammations, aches, and various viral and bacterial disorders. Men, women and children in the observed families all received biomedicine. As women's health is the main concern of this analysis, the study focused on women of different ages in middle-class families and their health-seeking behavior, across approximately 40 purposively selected households practicing different kinds of medicine. In most observed cases, when a man feels sick or unwell he can readily visit a biomedical doctor with full attention to his condition, but when it is a matter of a woman's health, there is some delay. Biomedicine is costly, and various socio-cultural and political factors are entangled with it: taking biomedicine is a matter of economic consideration that depends, directly or indirectly, on men's decisions. Women with less economic solvency more easily receive alternative medicine (homeopathy, ayurveda, kabiraji, herbal and natural remedies), as these are cheaper than biomedicine. Because biomedicine is tied to substantial economic resources and men's decisions, women are sometimes deprived of it.
---
Rural Practice of Biomedicine
The appearance of biomedicine is scientific and technocratic, yet its modern features are shaped and reshaped in rural Bangladesh: the global form of biomedicine is localized in rural areas according to the local situation (Shah, 2020). There is a shortage of qualified, registered doctors compared with the cities; Zaman (2005) shows that professional, qualified doctors engage in private practice in urban areas. In the area observed, there is a public hospital where registered doctors are present, but their number is far smaller than the needs of the large number of patients. People nevertheless receive health care there, facing multifarious socio-economic, cultural and political complexities. At the public hospital, people can see a professional doctor for only 10 TK per ticket.
Professional surgeons provide services at their private clinics, located far from the local bazaar, where they receive better remuneration for better treatment; economically well-off patients take their services. Besides these, there is one village doctor who provides health services from household to household, and five other quacks who, though unregistered, are also known as village doctors on the strength of their quackery experience and keep personal chambers in Baganchara bazaar. Although they have no medical training, they present themselves as skilled in health and medicine, and they play a vital role in rural health care because the villagers can easily obtain treatment from them.
---
Nature of Local Biomedicine
The study area is served by one government hospital, two private clinics, private chambers and some pharmacies where allopathic medicines are sold. Respondents claim that the public hospital lacks healthy health care. Majeda Begum, aged 35, said that the environment of the local hospital and clinics is substandard; I observed that the outside of the buildings is classy but the inside is damp. The government hospital consists of one four-storied and two two-storied buildings. One of my informants, Zamila, said that the large women's ward in the upazila health complex is covered by a fusty smell, and I also observed dirty, blood-soaked cotton covering the basin of the women's ward. There are 32 beds but only 2 bathrooms, which are crowded, with long lines of patients. Internal complexities arise in the hospital whenever anyone wants treatment from a skilled doctor or surgeon, or a bed in the ward or a cabin. Sumaiya, a university student, told me that baksheesh (a gratuity) is essential if you want the service of an experienced, skilled doctor; it varies from 100 TK to 500 TK or more, depending on the case and its complexity. Otherwise you will wait hour after hour in a long line, and it will be fruitless, because the doctor will leave at 2 p.m., or at the latest 5 p.m., for his private practice chamber. Severely ill patients are thus forced to give baksheesh to the doctor's assistant. On the one hand, well-off women enrol themselves carefully in the economic and tacit political structure existing in the local hospital; on the other, sick and less well-off patients suffer more, spending hours in a long line. Many times the sufferers return home and search for the alternative treatments, such as kabiraji, homeopathy or herbal medicine, that exist in the local area.
---
Case Study
Zamila Begum, a 48-year-old housewife, had suffered from severe back pain for 8 years. She went to the government hospital, where there was a long line. After two hours of waiting she watched a rich woman pay some extra money to the doctor's assistant and then easily enter the chamber. Zamila Begum wished to pay a tip, or baksheesh, as well, but her husband refused, as their money was barely enough for themselves. After four hours of waiting, severe pain attacked her spine and she could no longer stand; she returned home sweating heavily and in pain. Three days later she took medicine from a village doctor, who prescribed painkillers that removed her pain but left her with a severe gastric problem (Fieldwork, 2017). Thus the unequal management structure of the hospital and the internal politics of biomedicine directly or indirectly affect women's health.
---
Apprehension about Doctor's Presence in Hospital and Clinic
The rural people experience tension and hesitation over whether the doctor will be present at the local hospital or clinic. I observed that the doctors are not always there: if one goes to the hospital after a certain time, the doctor will be absent. This has become almost a statutory rule of the local biomedical treatment system. Even when patients' physical complexities worsen to a critical level, they must wait for the doctor's arrival. There are hospitals, clinics and a huge number of patients, but the proportion of doctors, nurses and qualified surgeons is small compared to the patients. It is very difficult to reach them after a certain hour, which carries serious risks for rural women's health; sometimes patients meet death without ever seeing a doctor's face.
During the fieldwork, the health seekers of Kaiba were mostly uncertain whether the clinics would be open or closed. In most cases the clinics were open but the doctors were absent, busy providing services in their private dispensaries. Ayesha Begum, suffering from serious itching all over her body, went to the nearest clinic at 10 a.m., but the doctor arrived at 11.30 a.m. There was a big crowd and a long line of patients, and only at about 1 p.m. did she get the opportunity to see the doctor. She told me that visiting a desired doctor here is time-consuming: she had stood for about two hours and forty minutes before she could face him. She had to endure great suffering, heat and crowding, which aggravated her itching and allergies; she began sweating heavily, which increased her inflammation. When, after much trouble, she returned home at around 3.00 p.m., she had to listen to harsh words from the senior members of her family, her husband and her mother-in-law for coming home late from the clinic and failing to do the household work properly.
---
Case Study
Selina Khatun was 23 years old and belonged to a middle-class family; she was a housewife living in my study area. During my fieldwork I suddenly heard that she had felt severe pain in her abdomen in the middle of the night. She was taken to the nearest clinic at about 2 a.m. As it was late at night, the doctor was absent, and her condition was critical. Her relatives made emergency phone calls to the doctor, but he did not answer. The doctor's assistant, though untrained and uneducated in health and medicine, then acted as if he were the main figure and in full charge of the clinic in the doctor's absence. I was curious about his educational background, and some days after the incident I learned, to my considerable surprise, that he had joined this profession after completing an honors degree in management. After some time he administered two injections to Selina simultaneously, without any testing. The next day, curious about Selina's condition, I visited her home. Unfortunately, the case had turned into a great misery. Selina Begum's relatives explained that after the injections she slowly fell asleep; in the morning her whole body became cold, and gradually her relatives realized she was no more. They believe she died because of the doctor's absence and the lack of appropriate medical care (Fieldwork, 2017).
---
The Health Services and Patients Expectation
There are three health care centers near the study area: a government hospital located beside the union parishad office and two private clinics. Rich, middle-class and poor people alike take services from the government hospital, as it is cheaper than the private clinics; although its outside and inside environments are less clean, treatment is properly given. Respondents said they felt anxious if the hospital remained closed for any occasion, for they were then compelled to search for private clinics, doctors' private chambers or alternative medical care (homeopathy, ayurveda, herbal and so on). One female informant had suffered from a menstrual problem and pain for 6 months. She went to a private clinic, where the doctor advised her to take various tests at a particular diagnostic center, which was costly; she went there and spent about 4,000 TK on all her physical tests, then returned to show the doctor the report. The doctor prescribed some medicine, but after taking it for about 3 months her condition was aggravated. She then went to the government hospital on the advice of her family members. She told me that the doctor Apa (the female doctor of the hospital) was very good: "I spent only 450 TK on medicine, and now I feel better. But the cost of treatment in a private clinic is much more expensive."
---
The Cost of Biomedicine in Rural Area and the Complication of Its Management System
The cost of biomedicine varies from region to region and area to area: it is one thing in developed countries and different in developing and less developed countries. In Bangladesh this difference exists in every part of the country, even between rural and urban areas. In the area observed there are two private clinics and a government hospital which, according to informants, lacks healthy health care facilities. Close observation revealed complications in its management system that create risks for women's health, as well as differences between the outward and inward environments of the government hospital and the private clinics. I experienced a sultry atmosphere in the women's ward of the public hospital, and while interviewing an informant I was told of its fusty smell. There was blood-soaked cotton in the bathroom, and two basins were filled with dirty water. The women's ward had 32 beds and only two bathrooms, and the C-section patients waiting in long lines for the bathrooms suffered increased physical and mental pain. One C-section patient told me: "I have been waiting here about 20 minutes. Today is the 5th day since my surgery. I have a lot of pain in my groin and lower abdomen, and it is difficult to stand like this. Sometimes my newborn baby also cries a lot without me in the bed." Private clinics provide relief from such discomfort, but they are very expensive, although their doctors are highly educated. Most patients prefer the government hospital to save money: it is cheap, and they hope to get good treatment there. Respondents claim that although there is a rule that half of the medicine should be given free in the government hospital, patients have to pay some money to the ward staff to get the necessary medicine, and expensive medicines must be collected from the pharmacy at the patients' own cost.
Because of the increasing cost of biomedicine and biomedical services, rural women often cannot accept them despite their good will to do so, even though biomedicine is treated as the immediate curative care. As Hans Baer notes, much money is spent on clinics, hospitals, drugs and miracle cures; critical medical anthropology (CMA) insists that biomedicine must be seen in the context of the capitalist world system and its profit-making orientation (Baer et al., 2003, p.40).
---
The Cost of Medical Services in Government
---
Doctor-Patient Relationship
Using CMA, this study emphasizes the doctor-patient relationship, analyzing it in terms of social, political, and economic power relations. The hospital has become a primary arena of different social relations (Baer et al., 2003, p. 42). Biomedical physicians enjoy and occupy professional dominance. According to Freidson, the professional dominance of biomedicine is strengthened and maintained by the political and economic dominance of elites, to which other occupations are subordinated (Freidson, 1970, p. 5). Doctors hold a monopoly of power through their lucrative medical skills, and they perform the key function of controlling the sick role of patients. In this sense they are superior and the patients are subordinate to them; this is the main point of analysis in this study. I observed that doctors occupy a strong position in the rural hospital, whereas rural patients, especially women, receive less dignity from them. Doctors do, of course, maintain good relations with some patients on the basis of class, status, and personal relationships. At times this unequal doctor-patient relationship affected the health of the observed women: respondents claimed that they could not talk openly with doctors about their health problems because the doctors expressed contempt toward them.
According to Nimmon and Stenfors-Hayes (2016), the power of biomedicine indicates an unequal relationship between doctors and patients: in the biomedical system women, children, and old people are less prestigious, while doctors command dignity. According to Kleinman et al. (1978), it is important for a doctor to pay full attention to patients' experiences, yet in most cases doctors attend little to patients' needs, perceptions, and knowledge. Ember and Ember (2004) likewise show the unequal relationship between doctors and patients. In the observed hospital and clinics, the doctors and surgeons are highly prestigious for their exclusive knowledge, and they tend to pay little attention to patients' own explanations of their sickness; sometimes they admonish their patients. The observed women patients claimed that being unable to explain their health problems to the doctor affected their mental and physical health equally. The experience of Amena Begum gives a proper understanding of the helplessness of the sick women of the observed area. Fifty-year-old Amena Begum had undergone a surgical operation on her left leg. I observed that when the doctor made his round and saw her, she said, "Doctor shaheb, though I have taken the medicine and injections properly, the pain and anguish in my leg have not gone away." The doctor answered with rebuke and neglect: "You have broken your leg at this old age; how will the pain vanish so easily?" She told me, "We cannot ask the doctor anything at any time; if we do, he scolds us, as he is a very busy man." I observed the doctor scold other women in the same way, as when Amena Begum asked how long she would have to remain admitted. The high status of biomedicine and of the biomedical profession leads practitioners to behave so as to portray themselves in a highly respectable manner.
This picture is found not only in urban areas but also in rural areas of Bangladesh, and it has become a very common matter in both the public and private health care sectors.
---
Mechanical Defects in an Unlicensed Clinic
Women in the study area seek medical care from clinics and the hospital for various ailments, and when they get no benefit, they sometimes find that their condition has worsened because of defective medical care. From my informants I learned, and also observed, that there is a clinic in the study area that was built without legal authorization.
The doctors of this clinic hold no certificates; they are not registered doctors, and none has proper medical education, experience, or qualifications. Rather, these doctors are quacks who have political and financial relations with the local police, administration, and political leaders of the area, with whose help they established the clinic. In the beginning the villagers did not know this and took medical care from them, and some of them subsequently faced various health problems.
---
Case Study
Salma Begum was a 32-year-old woman who had had an operation for an abdominal tumor in that clinic about three weeks earlier. She told me about a strange experience there. After the surgery, severe burning pain started in the area of her wound. The next day, around 10 a.m., Salma Begum and the other admitted patients were told to leave their ward and requested to go to the roof of the clinic. When the patients and their relatives asked the authorities the reason, they were told only that there was a little problem and that they should leave the place. The surgical patients climbed the stairs to the roof with the help of their relatives, with great difficulty, very slowly and very carefully. Later they learned that the doctors at this clinic were permitted to see patients but had no legal authorization to admit them or to perform surgical operations (OT); a senior police officer had come to inspect the clinic, which is why they had been asked to leave. Salma Begum's eyes filled with tears and, sobbing, she said to me, "I climbed the stairs to the roof with my wounded abdomen with great difficulty. My surgery was not done by an experienced hand, and I still have a problem. If I had known this before, I would not have gone there and had my operation done by such unskilled hands." Although 25 days had passed since the surgery and she had taken her medicine properly, no improvement was observed, and she was mentally broken (Baer et al., 2003, p. 30).
---
Biomedical Error and Technical Mistakes
Biomedicine relies greatly on science and modern technology at the time of diagnosis (Woolf et al., 1999). Errors may occur at every stage of the care process, from diagnosis to the administration of medicine. Medication errors are also found in clinical environments and may lead to unnecessary diagnoses, various tests, prolonged hospitalization, and even death (Kozer et al., 2006; Paul et al., 2014). In the studied area it was often observed that biomedicine produces false reports that affect women's health.
Rural women sometimes face health risks due to the mechanical failures of biomedicine. One of my female informants, 48 years old, was a direct victim of such a failure. She is a patient with high blood pressure and diabetes, and because neither is under control she often goes to the doctor's chamber for check-ups. One day, after measuring her blood sugar, the doctor gave her anti-diabetic medication because it had risen again. She felt physically weak for three days after taking that medicine. When she went back, the doctor said that because the machine had suffered a technical error the previous day, her blood sugar had not been measured correctly; he then prescribed new medicine after measuring her blood sugar with a new machine, and after taking it she felt better. Another informant, 40 years old, told me that during an operation (OT) in a clinic in Baganchra Bazar, the patient regained consciousness and at one point started screaming; this happened because the doctor could not administer the anesthetic properly. I spoke to a nurse at the clinic to find out more. "The patient was not properly anesthetized that day because the anesthetist was not present at the operating table," she told me, withholding her name and identity. As a result, the woman regained consciousness during her tumor operation.
---
Mistakes in an Ultrasound Report: Hastening the Killing of a Fetus
Women in our country are intrinsically eager to know whether the baby in the womb is a boy or a girl. Ultrasound is a blessing of modern science and a widely accepted, accurate medical technique for assessing pregnancy. It also identifies various specific conditions, such as the age of the fetus, possible miscarriage, the number of fetuses, fetal movement and growth, and the sex of the baby. The pregnant women of the observed village learn about their unborn children through this blessing of biomedicine, which has introduced them to the new technology of ultrasound and the information they desire.
---
Case Study
Rahima Sultana, a 41-year-old woman, had four daughters and was then pregnant for the fifth time. Much anxious about her unborn baby, she went to a private clinic in the nearest bazaar to identify the sex of the fetus by ultrasound. Reading the report, the physician concluded that she would have not a boy but a girl. Rahima's husband told her that if she had a daughter this time, he would divorce her. Finally, she consulted the physician and decided, very secretly, to have an abortion; after convincing the doctor with great difficulty, she received an abortion-inducing injection from the local health care facility. The next day she gave birth to a stillborn son. It was a great tragedy for her when she realized that it had been a baby boy. She told me, "I cannot tolerate my pain and sorrow. I am its mother, whether it was a boy or a girl; I carried it in my womb, but my family urged me to have this abortion" (Fieldwork, 2017). Her socio-economic circumstances and her family's condition pushed her to do it, and because of the technical error in the ultrasound report her life was put at stake. She felt very weak from the heavy flow of blood after the abortion, and she could not accept the grief of losing her child. Moreover, because of this terrible mistake of biomedical treatment, her family members made many cruel comments to her.
---
Biomedicine is Preventive, Less Curative
According to Alison Gray, biomedicine has failed to effectively control some diseases, such as tuberculosis (TB) and cancer, and is also unable to deal with the social problems that cause disease (Gray, 1996). Doctors prescribe different medicines, but ultimately the patients' ailments are not cured. A young girl, Ashura Khatun, a key informant of this research, had been suffering from a chronic cough and sore throat for a long time. She went to the government hospital and received treatment from a biomedical doctor, but her condition did not change. She then went to a private clinic at the local market and had a check-up costing about 1,200 Tk, but still was not cured. After that, on her father's advice, she took medicine from a homeopathic doctor and recovered completely about 20 days later. She told me that homeopathic medicine kills the germ forever, whereas with allopathy (biomedicine) the cough returns when the medicine is stopped, because it cannot exhaust the germ of the cough forever. This comment closely parallels critical medical anthropological thought, which claims that it is not always true that only biomedicine is useful; alternative medicine and ethnomedicine are also beneficial, as efficacy is multidimensional (Ember and Ember, 2004).
---
Generational Differences in Taking Biomedicine
A generational gap is observed among the women of the researched area in the practice and acceptance of biomedicine. While biomedicine has become more acceptable to the younger generation of women, it has not become as popular with older women. The elderly (58-78 years old, especially women) do not like to seek treatment from biomedical doctors unless the disease is serious. They reason that if one goes to an allopathic doctor, one must keep going back, whereas with a homeopathic specialist the disease is cured before repeated visits are needed; effective treatment is available at low cost, while the costs of biomedicine are high. The older women consider a young woman's caesarean delivery an unnecessary waste of money. The local hospital and clinics are full of male doctors, who dominate the knowledge of biomedicine; there are only one or two, or at most a few, female doctors. The elder women therefore express reluctance toward the hospital treatment system: they are afraid of losing their veil and worry about the sacred and the sinful, as they order their lives according to Islam. Purda is not only an aspect of Muslim life; it has become an essential, accepted part of social and cultural life, and non-Muslim older women also show aversion to taking treatment from male doctors. The social and religious practice of purda is more observable among elders than among the young, and they do not like the repeated intervention of technologies and male doctors on their bodies. In this regard the critical medical anthropologist M. Lock aptly said that medicalization is a patriarchal process in which the intervention of technologies and trained male doctors over the female body is observed (Ember & Ember, 2004, p. 118).
---
The Exercise of Political Power in Hospitals and Clinics
The internal environment of the hospital and clinics is not free from the influence of external political power: political leaders and economically powerful individuals enjoy special privileges in the local hospital and clinics. Politically and economically disadvantaged patients, by contrast, struggle to get a bed even in an emergency, unless they give some money, or baksheesh, to the hospital staff. As a result, sick women face health risks at times of emergency. In one instance, three beds were occupied by a politically and economically powerful man; although he had only one patient, he occupied more beds because he intended to stay there with his family members. Meanwhile a sick young girl from a middle-class family, who had broken her leg in a sudden accident, got no bed although she was writhing in pain; later she was laid on a mat on the floor. This case fits the critical medical anthropological perspective (CMA), which depicts how wider political relationships affect individual health (Baer et al., 2003, p. 27).
---
Case Study
A 22-year-old woman had had a tumor operation about five days earlier and needed eight stitches in her abdomen. As the assistant doctor dressed her wound she moaned loudly; two nurses stood beside her and held her two hands so that she could not move, while the professor kept scolding her for crying. She had a lot of pain due to an infection in the wound and had to relieve herself in the bed. Most of the time she was tense, her head full of questions about when she would recover and walk normally again (Fieldwork, 2017). When women's bodies undergo biomedical treatment for a disease, they are subjugated to biomedical knowledge and technology not once but repeatedly. Drugging women's bodies, injecting them, and subjecting them to surgery are expressions of the biopower of biomedicine.
---
Conclusion
Biomedicine is an effective treatment system in that it immediately protects human health from the acute manifestations of disease, and in most cases it is considered lifesaving medical care. Yet the observed women in the researched area face various problems and challenges in accessing this lifesaving medicine and medical care, and the local character of biomedicine and biomedical care has some rigid and inflexible aspects that affect women's health. If it were possible to reduce the additional costs of biomedicine, the unequal relationship between doctors and patients, the absence of doctors from hospitals and clinics, and other complications such as baksheesh and socio-political interference, then women would not have to face these health risks and could properly enter the practice of biomedicine. It is also important for biomedical practitioners to concentrate fully on patients' experiences of sickness, which would improve the doctor-patient relationship. At the time of diagnosis, doctors should place emphasis not only on biology, pathology, science, medical knowledge, and technology, but also on the patient's mind. This would bear more fruitful results in biomedicine and the biomedical treatment system.
Places of worship (POW) have traditionally been argued to have crime-reducing effects in neighborhoods because of their ability to produce social capital. Yet, the evidence for this proposition is surprisingly weak. Consequently, an alternative proposition, rooted in environmental criminology, suggests that POW might unintentionally operate as crime generators in neighborhoods insofar as they induce foot traffic and undermine guardianship and social control capabilities. Because of these competing propositions in combination with the limited number of studies on this topic, we conduct a block group analysis of crime, places of worship, well-established criminogenic facilities, and sociodemographic characteristics in Washington, DC. We estimate negative binomial regression models of both violent and property crime and find strong evidence for only one of the propositions, with the effects of POW being relatively strong in comparison to other predictors in the models. The implications of these findings for criminology, urban studies, and public policy are discussed. | Introduction
Places of worship (POW) broadly refer to locations where people gather to practice some type of religion, such as Christianity, Hinduism, Islam, and Judaism, to name a few. In addition to religious socialization, research has shown that POW are important because they tend to facilitate social ties, mutual cohesion and trust, and a willingness to intervene for the common good [1][2][3][4][5], which, in turn, can be used instrumentally to achieve a collective goal, namely minimizing crime in neighborhoods. The key implication is that social capital that originates within POW extends to other settings in the larger community [6][7][8][9], ultimately strengthening informal social control mechanisms, such as the dissemination of information, the mobilization of resources, or an informal system of monitoring public spaces. This suggests that places of worship should have a crime-reducing association in neighborhoods [6,7,9,10], even after controlling for a range of factors known to be associated with aggregate crime outcomes.
Although there are many reasons to expect that places of worship (POW) will reduce the amount of crime in neighborhoods [6,7,9], there is a dearth of empirical work assessing this proposition [7,11]. A principal reason for this mismatch between theory and data is that geospatial information on churches is generally lacking [7,11]. Because POW have tax-exempt status, they are not compelled to report to the Internal Revenue Service (IRS) or most municipalities, making it a bedeviling challenge for researchers to accurately capture POW aggregated to microgeographic units. For example, even the National Center for Charitable Statistics (NCCS) does not have a directory or listing of all POW located across the United States (U.S.) [11]. Data issues aside, the few studies to have analyzed the effects of POW have provided weak evidence of their crime-reducing effects [6,7,9,12]. This raises the possibility that POW might have unintended consequences for neighborhood crime control.
The crime and place literature has established that crime tends to spatially concentrate at or near risky facilities, including bars, liquor stores, check-cashing stores, retail outlets, restaurants, schools, and more [13][14][15][16][17]. This is because such facilities generally provide an opportunity structure for crime, that is, there is a tendency for motivated offenders and suitable targets to converge in space and time alongside weak guardianship [18][19][20]. More specifically, these facilities have deleterious effects because they operate as crime generators [19,21]. The latter is a foundational concept of environmental criminology and refers to the following: "particular areas to which large number of people are attracted for reasons unrelated to any particular level of criminal motivation they might have or to any particular crime they might end up committing [19:7]." Thus, the concept of a crime generator is predicated on high foot traffic (or what is referred to as a large ambient population) operating alongside emerging criminal opportunities. The implication is that potential offenders are not traveling to these areas with the specific intent to commit crime [18,19]. Rather, potential offenders will recognize situations in which there is a weakly guarded target, while going about their routine activities, and seize these opportunities to commit crime [13,19,20]. Analogous to how schools, restaurants, and retail stores have been linked to more crime in place [13,17,22,23], we propose that POW might unintentionally lead to neighborhood crime problems, mainly because of their ability to induce high foot traffic, which offers an abundance of targets, while at the same time undermining guardianship and social control capabilities [24,25].
There are competing expectations for how places of worship might impact crime in place. On the one hand, the literature rooted in social capital suggests that POW are crime-reducing because they engender social ties, common values and goals, and a responsibility for the collective good [1,3,4]. Conversely, POW might unintentionally be associated with more crime because they produce high foot traffic and undermine guardianship and social control capabilities, consistent with the environmental criminology literature [18,19]. Despite these competing expectations, there are few studies to empirically assess the POW-crime nexus in neighborhoods. Therefore, for the present study, we conduct a block group analysis of crime, places of worship, well-established criminogenic facilities, and sociodemographic characteristics in Washington, DC.
---
POW and social capital
A voluminous body of literature highlights how local institutions facilitate networks of effective social action in terms of crime control [7,11,[26][27][28]. Places of worship (POW) specifically have been argued to strengthen dimensions of social capital such as social ties, mutual cohesion and trust, and a willingness to intervene for the common good because of sponsored events and activities [2][3][4].
Consequently, POW can produce two types of social capital: 1) Bonding social capital builds intraneighborhood cohesion and social ties among adherents and residents, while 2) bridging social capital establishes interneighborhood cohesion and social ties among local institutions, municipal agencies, and other groups of people [4,8]. Both bonding and bridging social capital are theorized to strengthen neighborhoods' capacity for collective action against crime problems. So, when neighborhoods exhibit relatively higher levels of social capital (as a result of POW), there is a greater likelihood that residents and adherents will acknowledge crime problems, will achieve consensus on how to address these problems, and will solve the problems in a more collective fashion [2,4,8]. This leads to our first proposition:
P1. Places of worship will be associated with lower counts of both violent and property crime, controlling for a range of factors known to be associated with crime in neighborhoods.
The seminal work of Robert Putnam [4,29] posits that successful outcomes (in terms of education, health, crime control, family structure, etc.) are more likely in communities with high social capital, with local institutions like churches being the catalyst for the latter. Putnam's [4] analysis indeed reveals a negative effect of local institutions on crime in U.S. counties. Similarly, Lee [10] constructs an index of civic engagement, which includes an indicator of congregations, and finds that this index is negatively associated with crime in rural U.S. counties. In one of the more comprehensive examinations linking POW to crime in place, Beyerlein and Hipp [8] determine that three POW measures are largely associated with lower levels of murder, burglary, assault, and robbery in U.S. counties. Yet despite this evidence in support of proposition 1, there is also evidence to the contrary. For instance, one study failed to determine that POW are related to lower violent and property crime in New York (NY) block groups [7]. This study also failed to detect any conditional effects of their POW measure. Also, another study found that three church measures (i.e., church presence, total churches, and churches within 500 ft) had nonsignificant effects on informal social control in Louisville and Lexington (KY) block groups [9].
---
POW and criminal opportunities
Environmental criminology has argued and shown that crime is spatially concentrated in neighborhoods with a high density of nonresidential activities [30][31][32]. More specifically, activity nodes refer to locations where people spend a significant amount of time conducting nonresidential routine activities (e.g., work, school, grocery, recreation, shopping, etc.), while pathways are features of the planned physical environment that connect activity nodes with one another (e.g., a road network, monorail or train system, bus lines, walking trails, etc.). Neighborhoods with a higher concentration of nodes and pathways have indeed been shown to have a disproportionate amount of crime [13,16,31] and this is because nodes and pathways yield overlapping activity and awareness spaces of large numbers of people-a recipe for crime problems [18][19][20].
Environmental criminologists have increasingly used the term crime generator, an extension of the activity node concept, to denote physical structural qualities of neighborhoods that breed crime as a result of foot traffic and anonymity specifically [13,16,33]. Stores, restaurants, and schools commonly operate as crime generators because of their ability to increase the volume of targets and undermine guardianship capabilities, that is, the ability to informally monitor and regulate public spaces [16, 17, 22-24, 34, 35]. Offenders do not travel to these locations with the specific intent to commit crime [19], rather, the key implication is that amid high foot traffic, a potential offender will notice a weakly guarded target (person or object) and seize the opportunity to commit crime.
Given that foot traffic is a defining characteristic of crime generators, we argue that POW might operate in the latter capacity. In addition to religious meetings and services that occur on a weekly basis, many POW have a mission statement that involves the provision of need-based social services to its members, as well as less fortunate people in the larger community. These services include shelter and housing, food/soup kitchens, therapy and counseling, and job procurement and training [7,9,12]. When POW are effective in improving the social circumstances of people, these people may be less inclined to resort to criminal behavior [7,11], yet this remains an open question. On the other hand, high foot traffic because of the provision of need-based services will almost certainly induce criminal opportunities [6]. Analogous to how a shopping mall not only provides positive services and economic benefits but also provides criminal opportunities by increasing the presence of both potential offenders and targets [22], a place of worship has the ability to increase the number of potential offenders and targets in a neighborhood simply through the increased foot traffic (or ambient population) that results. We therefore evaluate a second proposition:
P2. Places of worship will be associated with higher counts of both violent and property crime, controlling for a range of factors known to be associated with crime in neighborhoods.
Two studies reveal that places of worship might unintentionally lead to crime problems in neighborhoods. Triplett and White [6] determine that neighborhood variation in street crimes and domestic assaults, respectively, is positively associated with the number of churches in Norfolk (VA) block groups. Desmond and Kikuchi [12] examine how different types of religious congregations are linked to crime in Indianapolis (IN) block groups. The authors determine that their congregation measures mainly have nonsignificant effects on violence, whereas for property crime most of these measures yield significant positive effects. Conversely, there are no instances in which the density of congregations shows a crime-reducing association (except for civically engaged organizations). However, neither of these studies controls for the presence of well-established criminogenic facilities (e.g., bars, liquor stores, check-cashing stores, retail outlets), and therefore the observed criminogenic effects of POW might be attributable to the latter facilities.
In the sections that follow, we explain our data and analytic strategy used to test our propositions (i.e., P1 and P2). After reporting the findings, we provide a discussion of the implications for criminological theory, urban studies, public policy, and future research.
---
Data and methods
---
Study area
The study area is Washington, DC, the capital of the United States. DC is an urban area with 670,050 persons according to the most recent estimate by the United States Census Bureau (https://www.census.gov/quickfacts/fact/table/DC,US/PST045221). It follows that DC is one of the most populous areas within the United States, specifically, ranking 23rd among U.S. cities. DC also has an ethnically diverse population; 37.3% of residents self-identify as white, 45.8% as Black, 11.5% as Latino, and 4.5% as Asian. Although the median household income is significantly more than the national average ($90,842 versus $64,994), such disparity is explained by DC's high cost of living, and therefore it is not surprising to observe that DC's poverty rate exceeds that of the national average (16.5% versus 11.6%).
Washington, DC, offers a favorable setting for examining the effects of places of worship (POW) on crime in place for several reasons. First, DC has recently (and historically) exhibited rather high levels of both violent and property crime among the 100 most populous U.S. cities. In 2019, for example, DC ranked 24th and 29th in violent and property crime rates, respectively (https://www.fbi.gov/how-we-can-help-you/need-an-fbi-service-or-more-information/ucr/publications). Therefore, there is a need to determine the factors that affect the spatial distribution of crime problems in DC. This leads to a second reason: Washington, DC, has a large presence of facilities (e.g., alcohol stores, retail districts/centers, check-cashing stores, and metro stations) which have been theorized or shown to be associated with more crime in place [13,16,19]. The presence of various criminogenic facilities has been provided through DC's open data portal (https://opendata.dc.gov/), and therefore this provides the necessary means to minimize the possibility of obtaining spurious effects of our key independent variable (i.e., places of worship) on crime. Furthermore, DC has a large presence of POW (N = 742) with the corresponding longitude (x) latitude (y) data made publicly available through the portal. Notably, previous researchers have documented the many challenges of collecting geospatial data on POW [7,11,26,36]; most notably, there is not a census of POW in the U.S. because most of them are tax-exempt and therefore they are not accurately captured by data provided by the National Center for Charitable Statistics (NCCS). By using DC data on POW in combination with their large presence, it affords us the flexibility to test for a multitude of main and moderating effects.
---
Units of analysis and sample
The U.S. Census Bureau provides data on various geographic/spatial units. We draw on block groups specifically to link places of worship to crime in place, primarily because block groups have been designed to be homogenous on a range of sociodemographic characteristics, including income and poverty, educational attainment, household structure, age, and length of residence [37,38]. Thus, our selection of block groups as our units of analysis is consistent with previous empirical work that has examined "neighborhood effects" on a range of outcomes, such as ethnic and racial segregation [e.g. 39], social networks [e.g. 40], walkability and health [41], gentrification [e.g. 42], and crime [43], to name a few.
The present study involves secondary analysis of publicly available block group data, and therefore did not require institutional review board approval. For our analysis, we estimate crime models using a sample of 449 block groups (out of the 450 in DC); one block group was dropped because it is missing necessary information from the U.S. Census American Community Survey (ACS). We cannot use the constituent tract's information as a substitute for the missing block group information because, for this block group, the tract and block group boundaries are exactly the same. We suspect the missing data for some variables is attributable to the fact that this area largely encompasses Georgetown University and its affiliated facilities.
---
Dependent variables: Crime counts
We collected crime data from DC's open data portal. These are official crime data coded and reported by the District of Columbia Metropolitan Police Department (MPD). MPD provides crucial information on each crime incident from 2021, including the longitude-latitude coordinates of the crime, the date on which the crime occurred, and the type of Part 1 crime committed according to the Uniform Crime Reporting (UCR) program in the United States. Accordingly, we aggregated these data to their constituent block groups and computed the number of incidents for the following crime types: murders, robberies, aggravated assaults with a gun, burglaries, larcenies, and motor vehicle thefts. Furthermore, we created an index of violent crimes (combining murders, robberies, and assaults) along with an index of property crimes (combining burglaries, larcenies, and motor vehicle thefts). Notably, our main models utilize the latter two indices as outcome measures, whereas some of the ancillary models assess each of the crime types (separately) that comprise both indices.
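The aggregation just described can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual pipeline: the GEOID values and column names are hypothetical, and the spatial join of incident coordinates to block groups is assumed to have already been performed.

```python
import pandas as pd

# Hypothetical incident-level records: one row per 2021 crime incident,
# assumed to be already spatially joined to its block group (GEOIDs made up).
incidents = pd.DataFrame({
    "geoid": ["110010001001", "110010001001", "110010002001", "110010002001"],
    "offense": ["robbery", "larceny", "murder", "motor_vehicle_theft"],
})

VIOLENT = {"murder", "robbery", "aggravated_assault_gun"}
PROPERTY = {"burglary", "larceny", "motor_vehicle_theft"}

# Count incidents per block group and offense type ...
counts = incidents.groupby(["geoid", "offense"]).size().unstack(fill_value=0)

# ... then build the violent and property indices by summing their components.
counts["violent_index"] = counts[[c for c in counts.columns if c in VIOLENT]].sum(axis=1)
counts["property_index"] = counts[[c for c in counts.columns if c in PROPERTY]].sum(axis=1)
```

Each block-group row then carries the type-specific counts plus the two indices used as outcome measures in the main models.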
---
Independent variables: Places of worship, criminogenic facilities, and sociodemographic characteristics
DC's open data portal provides information on places of worship in 2019, most notably the longitude-latitude coordinates of each POW. We identified 742 POW in the dataset after eliminating 32 cases with coordinates outside of the study area or with duplicate coordinates. Although some prior studies have theorized that the effects of POW differ by the religion or denomination of the POW [8,9,44], DC only classifies its POW by seven religions, and 97% of them are classified as Christian. Denomination information was not provided for places of worship, and we determined that the names of POW were not sufficient for accurately classifying them into denominations. Thus, we created an index of the number of all places of worship, aggregated to block groups.
One of the most enduring correlates of spatial crime patterns is the presence of facilities that provide an opportunity structure for offenders, targets, and weak guardianship to converge in space and time [18,19]. Accordingly, we constructed several variables to capture such facilities. These facilities include the number of onsite alcohol outlets (i.e., bars, night clubs, and taverns), offsite alcohol outlets (i.e., liquor stores and convenience stores), check-cashing stores, and retail districts/centers (e.g., shopping malls and plazas). We also include a dichotomous variable for the presence of a DC metro station (1 = Yes and 0 = No).
It is also necessary to control for sociodemographic characteristics that have been linked to the spatial distribution of crime [25, 45-47]. Drawing on data from the U.S. Census Bureau, we create measures of various sociodemographic characteristics. In particular, we utilize the American Community Survey (ACS) five-year estimates from 2015 to 2019, aggregated to block groups. To capture differences in economic hardship, we account for poverty (%) in block groups. We computed a Herfindahl index over five ethnic groups (white, Black, Latino, Asian, and other races) to account for the ethnic heterogeneity of block groups. The concentrations of both Black (%) and Latino (%) residents are also included to account for populations that have been historically marginalized by the political economy of place [48]. Furthermore, we employ a variable of homeowners (%) as a proxy for residential stability, and we control for two types of housing characteristics: the number of housing units (/100) and occupied units (%). Finally, we created a variable of the population (/100) along with a variable that specifically captures the age group with the highest rate of offending and victimization, that is, persons aged 15 to 29 (%). Descriptive statistics for all measures are shown in Table 1.
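The ethnic heterogeneity measure can be made concrete with a short sketch. The paper does not spell out its exact formula, so the common Herfindahl-based operationalization below (one minus the sum of squared group proportions) is an assumption on our part.

```python
def ethnic_heterogeneity(shares):
    """Herfindahl-based diversity: 1 - sum of squared group proportions.

    0 means everyone belongs to one group; with five groups the value
    approaches 0.8 as the groups become evenly represented.
    """
    assert abs(sum(shares) - 1.0) < 1e-9, "proportions must sum to 1"
    return 1.0 - sum(p * p for p in shares)

ethnic_heterogeneity([1.0, 0.0, 0.0, 0.0, 0.0])  # fully homogeneous -> 0.0
ethnic_heterogeneity([0.2] * 5)                  # evenly mixed -> ~0.8
```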
---
Analytic strategy
The dependent variables of crime counts are significantly skewed and overdispersed (i.e., the variance exceeds the mean). Thus, we analyze the spatial distribution of crime using negative binomial regression in Stata 17, a Poisson-based regression that effectively accounts for overdispersion via its alpha parameter [49,50]. While the Poisson distribution can be appropriately used to model certain count variables, for the present study we find that Stata's likelihood-ratio test, which tests the null hypothesis that the dispersion parameter (alpha) is equal to zero, is significant for all our models (p < 0.05). Negative binomial regression is therefore needed to account for overdispersion [49,50]. At the same time, we acknowledge that ordinary least squares (OLS) regression is a viable alternative for modeling the spatial distribution of crime, especially given that a large majority of block groups do not have zero crime incidents, and therefore we estimated ancillary models using OLS. To be clear, we are concerned with using the appropriate model(s) to analyze the spatial distribution of crime; however, it should be noted that neither negative binomial nor OLS regression is a spatial regression model. Therefore, we estimate ancillary spatial error models as a final robustness check (described in more detail below).
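The overdispersion rationale can be illustrated with simulated counts; the parameters below are arbitrary and not estimated from the DC data. A negative binomial variable with the same mean as a Poisson variable exhibits a variance well above its mean, which is exactly the pattern the alpha parameter absorbs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson counts: the variance is (approximately) equal to the mean.
poisson = rng.poisson(lam=3.0, size=100_000)

# Negative binomial counts with the same mean but a larger variance.
# Under numpy's parameterization (n successes, success probability p),
# the mean is n(1-p)/p = 3 and the variance is n(1-p)/p**2 = 7.5.
nb = rng.negative_binomial(n=2, p=0.4, size=100_000)

print(poisson.mean(), poisson.var())  # roughly 3 and 3
print(nb.mean(), nb.var())            # roughly 3 and 7.5 -> overdispersed
```

When the sample variance clearly exceeds the mean, as in the second case, the Poisson assumption is violated and a negative binomial model is the more defensible choice.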
Geographic units such as block groups are not islands unto themselves [51]; in fact, the conditions of spatially contiguous/adjacent units can very well shape what occurs in the focal unit, a phenomenon often referred to as a spillover effect. What this means for the current study is that crime in the focal block group is likely impacted by the amount of crime in nearby block groups [52-54]. To account for this spatial dependence, we constructed a spatially lagged measure for each crime outcome using GeoDa software with first-order queen contiguity. Such a measure captures the average number of crime incidents among contiguous block groups in relation to the focal block group. We include a spatially lagged measure of crime (as a predictor) in our full models.
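The spatially lagged measure reduces to averaging neighbors' crime counts. A minimal sketch with hypothetical block groups and a hand-coded first-order queen-contiguity adjacency list (the paper builds the actual weights in GeoDa):

```python
# Hypothetical crime counts and queen-contiguity neighbors per block group.
crime = {"A": 10, "B": 4, "C": 0, "D": 6}
neighbors = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B"], "D": ["B"]}

# Spatial lag: the average crime count among the focal unit's neighbors.
spatial_lag = {
    bg: sum(crime[n] for n in adj) / len(adj)
    for bg, adj in neighbors.items()
}
# e.g., spatial_lag["A"] == (4 + 0) / 2 == 2.0
```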
A general expression of the (full) negative binomial regression models that we estimate is as follows:
y = β1·POW + β2·SD + β3·CF + β4·SLy + α,    (1)
where y is the number of crime incidents, POW is the number of places of worship, SD is a matrix of the sociodemographic characteristic measures, CF is a matrix of the criminogenic facility measures, SLy is the average number of crime incidents in block groups adjacent to the focal block group (a spatially lagged measure), and α is an intercept. While one approach for modeling crime across geographic units is to specify the population count as an exposure term (thereby estimating the outcome as a crime rate), we have instead modeled crime counts by including the population count as a predictor, given growing concerns over population count being the denominator of a calculated crime rate [e.g., see 18, 55-58]. As anticipated, we detected minimal evidence of spatial autocorrelation in our full models as a result of including the spatially lagged measure of crime. Although the Moran's I value was statistically significant in all instances, the maximum value was .08 (which is rather weak given that positive spatial autocorrelation ranges from 0 to 1). Furthermore, we assessed and found no evidence of multicollinearity issues based on variance inflation factors (VIF). The maximum VIF was 4.68, which does not exceed the commonly used cutoff of 10 [59,60].
In the results section, we present two models for both the violent and property crime outcomes (Table 2). We first estimate a baseline model that features our places of worship measure along with the sociodemographic characteristic measures, consistent with the modeling approach undertaken by certain prior studies [for example, see 6,8,12]. We then estimate a full model that additionally includes the measures of well-established criminogenic facilities and the spatially lagged measure of crime in order to determine the extent to which places of worship maintains a significant effect on crime (if at all). Crime and place researchers have called for analyses to simultaneously integrate measures associated with social disorganization and routine activities theories [for example, see 33,61]; our full model is consistent with this call.
In addition to discussing the observed effects in terms of their direction and statistical significance, we highlight the magnitude of these effects in relation to one another. We draw on an approach that determines the percent change in the expected crime count for a one standard deviation increase in the variable of interest using the following formula: (exp(β × SD) − 1) × 100. This is a preferred approach because some of our independent variables drastically differ in terms of their scales [49: 492-493, 514-516]; most notably, the POW and facility measures are counts whereas the sociodemographic characteristic measures are percentages. Similar to previous crime and place studies [62-65], we utilize this approach to effectively compare the effect sizes of variables with substantively different scales.
On the other hand, we recognize that another common approach is to assess the magnitude of the effects using incident rate ratios (IRR). Specifically, an IRR denotes the percent increase or decrease for every one-unit increase in a predictor, computed by multiplying the difference between the IRR and one by 100, where positive values yield a percent increase and negative values yield a percent decrease [49]. In Table 3, we compute the effect sizes using both approaches, although we base our inferences on the first approach because, for the second approach, a one-unit increase may represent a very large increase for one predictor (e.g., DC Metro Station) and a very small increase for another predictor (e.g., population).
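Both effect-size computations are one-line transformations of a coefficient. The sketch below uses the violent-crime POW coefficient reported later in the text (0.121) with an SD of 2, which is illustrative only, since the POW standard deviation is not reported in this excerpt.

```python
import math

def sd_effect_size(beta, sd):
    """Percent change in the expected count for a one-SD increase."""
    return (math.exp(beta * sd) - 1) * 100

def irr_effect_size(beta):
    """Percent change per one-unit increase, via the incident rate ratio."""
    return (math.exp(beta) - 1) * 100

round(sd_effect_size(0.121, 2.0), 1)  # -> 27.4 (with an assumed SD of 2)
round(irr_effect_size(0.121), 1)      # -> 12.9
```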
---
Results
---
Violent crimes
Model 1, a baseline model, shows that places of worship (POW) is significantly and positively associated with violent crime counts, controlling for sociodemographic characteristics that are commonly accounted for by crime and place researchers [45]. A 1 standard deviation (SD) increase in places of worship implies a 27.6% increase in the expected number of violent crimes based on the following formula: (exp(β × SD) − 1) × 100.
We also find that ethnic heterogeneity is positively associated with violent crime, albeit at the marginally significant threshold (p < .10), while the percentage of Black residents also shows a positive association with the outcome. The percentage of homeowners is significantly and negatively associated with violent crime, whereas the number of housing units exhibits the opposite relationship. This baseline model suggests that POW might have an unintended consequence, consistent with proposition 2.
To minimize the possibility of obtaining spurious effects of POW, it is necessary to account for physical structural qualities of neighborhoods that have been linked to crime. In model 2, we therefore include measures of various facilities as well as the spatially lagged measure of violent crime counts. Although ethnic heterogeneity is no longer related to violent crime, the positive relationship between the percentage of Black residents and violent crime persists. Moreover, the negative relationship between homeownership and violent crime remains as well. On the other hand, the positive effect of housing units becomes nonsignificant, whereas population is now related to more violent crime (albeit at the marginally significant threshold). Most of the facility measures indicate significant relationships with violent crime in the hypothesized direction; that is, violent crime in block groups is positively linked to the presence of alcohol outlets (both onsite and offsite establishments), check-cashing stores, and DC metro stations, respectively. We also determine a positive relationship between the spatially lagged measure and the outcome, meaning that violent crime in nearby block groups is related to higher numbers of violent crime in the focal block group. Similar to model 1, we observe that POW maintains its significant and criminogenic effect in block groups. Yet, the size of the coefficient has decreased by more than half from model 1 to model 2 (from .121 to .059), and the effect of POW on violent crime is accordingly much weaker: there is a 12.6% increase in the number of violent crimes for a 1 SD increase in POW. Although the inclusion of the facility and spatial lag measures naturally reduces the magnitude of the POW effect, POW nonetheless retains one of the strongest effects among the variables in the full model (see Table 3).
The magnitude of the effect of POW, specifically, rivals or even outpaces those of on-premise alcohol outlets, off-premise alcohol outlets, check-cashing stores, and the presence of a DC metro station. All of this is to say that the full model of violent crime reinforces the notion that POW appear to operate as crime generators (not social capital organizations).
---
Property crimes
Model 3, a baseline model, reveals that places of worship is significantly and positively related to higher numbers of property crime in block groups, controlling for sociodemographic characteristics that have been linked to crime in place.

(Table 3 notes: the SD effect size is the percent change (%) in the expected crime count, computed as (exp(β × SD) − 1) × 100, all else equal; IRR refers to the incident rate ratio effect size; empty cells denote nonsignificant effects. https://doi.org/10.1371/journal.pone.0282196.t003)

The magnitude of this effect is quite strong:
A 1 SD increase in places of worship implies a 31.4% increase in the expected number of property crimes, consistent with proposition 2. While most of the sociodemographic characteristic measures have nonsignificant effects, we do find some instances in which they do indeed affect the spatial distribution of property crimes. For instance, we detect positive effects for ethnic heterogeneity, number of housing units, and the percent aged 15 to 29. On the other hand, percent homeowners exhibits a (marginally significant) negative effect on property crime.
In model 4, we additionally include the facility measures along with the spatially lagged measure of property crime. This provides a conservative test of the effect of POW on property crime in block groups. We find that the effects of ethnic heterogeneity, the percent homeowners, and the percent aged 15 to 29 are no longer significant in the full model. Conversely, the number of housing units maintains its positive effect, while the population measure now shows a marginally positive association with the outcome. Consistent with environmental criminology [18,19], each of the facility measures has a positive effect on property crime (except for the presence of retail districts/centers). We also find evidence of property crime being spatially clustered, as the spatially lagged measure indicates a positive effect.
The coefficient estimate for POW has decreased by more than half from the baseline to the full model (from .136 to .066), which is similar to what we observed for violent crime. Nonetheless, the criminogenic effect of POW remains strong; there is a 14.2% increase in the number of property crimes for a 1 SD increase in POW. Moreover, the magnitude of this effect is stronger than that of all the other predictors in the model (Table 3), with the exception of housing units and the spatially lagged measure of property crime. The strong positive effects of POW on both violent and property crimes suggest that POW may need to be reconceptualized as a crime generator rather than a source of social capital that is leveraged to reduce crime.
---
Ancillary results
An alternative analytic strategy is to model the violent and property outcomes using ordinary least squares (OLS) regression instead of negative binomial regression. Thus, we estimated OLS models of violent and property crimes using the same model specifications as those implemented for negative binomial regression (i.e., Table 2, M1-M4). Table 4 shows the results of the OLS models, and it is apparent that the pattern of results is very similar to that produced by negative binomial regression. Although the criminogenic effect of POW weakens in magnitude between the baseline and full model for both violent and property crimes, it nonetheless remains statistically significant and crime-producing. Also, the measures capturing sociodemographic characteristics and well-established criminogenic facilities yield effects that are virtually the same as those produced by negative binomial regression (in terms of both direction and statistical significance).
Another necessary line of inquiry is whether POW affect certain crime types differently. We therefore estimated separate models for each crime type that comprises the violent and property crime indices. From Table 5, we determine that POW is significantly and positively associated with the number of robberies (14.5% more for a 1 SD increase), burglaries (12.2% more for a 1 SD increase), larcenies (13.8% more for a 1 SD increase), and motor vehicle thefts (15.3% more for a 1 SD increase), while demonstrating a marginally significant association with murders in the same direction (18.5% more for a 1 SD increase). Thus, aggravated assault is the lone instance in which POW fails to have a statistically significant relationship with a form of crime. Finally, we tested for moderating effects between certain independent variables (e.g., POW × poverty and POW × percent Black), but in all instances these interaction terms were found to be nonsignificant.
A final consideration is to determine whether the pattern of results remains unchanged when estimating spatial regression models, given that negative binomial and OLS regression do not explicitly account for spatial autocorrelation. We therefore estimate a spatial error model for both the violent and property outcomes, mimicking the full model specifications illustrated in Table 2. Specifically, we draw on GeoDa software to conduct maximum likelihood estimation of spatial error models that include a spatial autoregressive error term. Table 6 shows the results of the two spatial error models, and it is apparent that the pattern of results is very similar to that produced by negative binomial and OLS regression. Not only is POW significantly and positively related to violent and property crimes, but the Moran's I of the spatial error residuals is also very close to zero. This means that including the spatial autoregressive error term has, in effect, removed all the spatial autocorrelation from these models.
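Global Moran's I, the diagnostic used throughout for residual spatial autocorrelation, can be computed directly. The toy contiguity matrix and residual values below are hypothetical (the paper obtains its values from GeoDa):

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for values x given a spatial weights matrix W."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()              # deviations from the mean
    s0 = W.sum()                  # sum of all weights
    return len(x) / s0 * (z @ W @ z) / (z @ z)

# Symmetric binary contiguity matrix for four hypothetical areas.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

residuals = np.array([0.2, -0.1, 0.05, -0.15])
morans_i(residuals, W)  # small in magnitude -> weak spatial autocorrelation
```

Values near zero, as with the spatial error residuals reported above, indicate that little spatial structure remains in the model's errors.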
---
Discussion
Physical structural qualities of neighborhoods have been argued to have consequences for crime [2,18,20], yet there is a dearth of research investigating how places of worship shape spatial crime patterns [6,9]. What's more, there are competing theoretical arguments for how POW would impact crime in neighborhoods, with social capital and environmental criminology perspectives arguing that POW have negative and positive associations, respectively. Accordingly, we examined the spatial distribution of violent and property crime in Washington (DC) block groups as a function of places of worship, well-established criminogenic facilities, and sociodemographic characteristics. We highlight three key findings.

The first key finding was that we consistently found POW to be associated with more violent and property crime, consistent with the results of two previous studies [6,12]. This, coupled with the fact that we failed to detect a single instance of POW having a significant and crime-reducing effect, suggests that POW do, in fact, operate as crime generators in neighborhoods. That is, the presence of outsiders lessens familiarity and makes it more challenging for insiders (i.e., members and residents) to identify potential offenders and detect suspicious, crime-related activity [18,19,24]. For insiders, there may be ambiguity concerning whom they should direct territorial behavior at, but also the situational contexts in which it is acceptable to do so [24,66,67]. Recent studies have drawn on social media, cell phone, and transportation data to measure various properties of the ambient population and have indeed determined that the size of the ambient population is associated with more crime in place [56,58,68-70]. Thus, we propose that a significant increase in the volume of potential offenders and targets is enough to disrupt (or completely negate) a process whereby social capital is formed in POW and later used to instrumentally prevent and solve crime problems.
The second key finding was that places of worship exerted strong criminogenic effects, even after controlling for well-established criminogenic facilities and sociodemographic characteristics. We found that the effects of POW were stronger in magnitude than most of the other predictors in the violent and property crime models (Table 3), including alcohol outlets with onsite consumption, check-cashing stores, and the presence of a DC Metro station. Moreover, ethnic heterogeneity and poverty, two predictors often equated with social disorganization [45,46], were not significantly related to more crime. Because POW are important local institutions insofar as they promote education, steady employment, marriage, drug and substance avoidance, and friendships among members, our findings should not be interpreted as an indictment of religion or POW. Rather, they highlight POW as an (unexpected) ecological risk factor for neighborhood crime, similar to how shopping malls, central business districts, restaurants, and retail stores have been deemed to operate as crime generators [13,16,22,35]. Our results have implications for both researchers and policymakers. When modeling crime across geographic units, crime and place researchers importantly control for factors that induce criminal opportunities (i.e., liquor stores, bars, check-cashing stores, and transit stops). We suggest additionally controlling for POW to minimize the possibility of obtaining spurious effects with regard to the independent variables of interest. Relatedly, for crime policy, we encourage researchers and city officials to account for the presence of POW in determining the risk of crime across areas within a city. This could be accomplished via two data-driven approaches: 1) combine regression analysis and the mapping of predicted outcome variables [e.g., see 71], and 2) implement risk terrain modeling [e.g., see 72].
Many policing strategies and intervention efforts are predicated on identifying areas with a disproportionate amount of crime; therefore, the incorporation of POW may provide more accurate profiles on which areas would benefit the most from increased (fair and consistent) policing, or municipal resources, services, and partnerships.
---
Limitations and directions for future research
Although the current study provides crucial insight into places of worship and crime in neighborhoods, we acknowledge certain limitations and directions for future research. First, because of data limitations we were unable to test the theorized mechanisms that may link POW to violent and property crime in neighborhoods (although this is true of nearly all prior studies on the topic). Thus, we encourage future research to collect neighborhood-level data on social capital, civic engagement, foot traffic (or the ambient population), and anonymity in order to test whether these factors do, in fact, mediate the effects of POW on crime. Second, our key independent variable captures the presence of places of worship, and therefore it does not capture the differential capacity of POW to impact crime. A natural extension is for future studies to assess variation in neighborhood crime as a function of more fine-grained characteristics of POW, such as the number of adherents/members, employees, income/donations, and years of operation. A few studies have explored this line of inquiry [6,11,27], yet it remains to be seen whether measures that account for the differential capacity of POW provide additional knowledge beyond what can be gained from the standard measurement approach (i.e., the number of POW in a neighborhood). Also, the extent to which DC's data on POW are exhaustive and accurate is an open question, though we have no reason to suspect that these data are any less valid than other POW sources (e.g., Yelp, the phonebook, Google, etc.), nor any reason to think that missing data are systematic rather than random. Third, the analysis is cross-sectional, and therefore it is unable to test how changes in the number of POW influence changes in the number of crimes.
While this does not repudiate the findings of the present study, as understanding the spatial distribution of crime at a single timepoint offers an important baseline, future research may want to perform a longitudinal analysis using a fixed-effects approach. Finally, the analysis and findings pertain to neighborhoods of a single city. It is therefore possible that the observed effects might operate differently across U.S. cities. Accordingly, future work needs to examine potential relationships between places of worship and crime across a diversity of ecological settings, including cities beyond the United States.
---
The data on crime, places of worship, and well-established criminogenic facilities can be found on Open DC (https://opendata.dc.gov/). The data on sociodemographic characteristics can be found on the United States Census Bureau website (https://www.census.gov/programs-surveys/acs.html).
---
Author Contributions
Conceptualization: James C. Wo.

---
This study explores the level and frequency of anxiety about COVID-19 infection in some Middle Eastern countries, and differences in this anxiety by country, gender, workplace, and social status. Another aim was to identify the predictive power of anxiety about COVID-19 infection, daily smartphone use hours, and age in smartphone addiction. The participants were 651 males and females from Jordan, Saudi Arabia, the United Arab Emirates, and Egypt. The participants' ages ranged between 18 and 73 years (M = 33.36, SD = 10.69). A questionnaire developed by the authors was used to examine anxiety about COVID-19 infection. Furthermore, the Italian Smartphone Addiction Inventory was used after being translated, adapted, and validated for the purposes of the present study. The results revealed that the percentages of participants with high, average, and low anxiety about COVID-19 infection were 10.3%, 37.3%, and 52.4%, respectively. The mean scores of anxiety about COVID-19 infection in the four countries were average: Egypt (M = 2.655), Saudi Arabia (M = 2.458), the United Arab Emirates (M = 2.413), and Jordan (M = 2.336). Significant differences in anxiety about COVID-19 infection were found between Egypt and Jordan, in favor of Egypt. Significant gender differences were found in favor of females in the Jordanian and Egyptian samples, and in favor of males in the Emirati sample. No significant differences were found regarding workplace and social status. The results also revealed a significant positive relationship between anxiety about COVID-19 infection, daily smartphone use hours, and age on the one hand, and smartphone addiction on the other. The strongest predictor of smartphone addiction was anxiety about COVID-19 infection, followed by daily use hours. Age did not significantly contribute to the prediction of smartphone addiction.
The study findings shed light on the psychological health and cognitive aspects of anxiety about COVID-19 infection and its relation to smartphone addiction.

---

Introduction
Epidemics are considered to be a prominent source of psychological and social disorders, e.g., fear, anxiety, and reluctance to communicate with others [1]. It is common during epidemics and pandemics for people to suffer from stress and anxiety, including the fear of infection and death, avoiding receiving medical treatment at health facilities, fearing the loss of relatives, and fearing isolation because of quarantine, which causes boredom, loneliness, and depression [2]. There is a psychoneurotic connection between acute inflammations of the respiratory system and psychological disorders, as occurred with SARS decades ago. People in quarantine suffer from boredom, anger, and loneliness. Symptoms such as cough and fever can increase anxiety, intrusive thoughts, and the fear of COVID-19 infection [3]. The world is currently experiencing the COVID-19 pandemic, which spread to all countries in a short amount of time. The WHO declared the novel coronavirus outbreak a pandemic in March 2020 and predicted that it would spread to all countries, urging them to take the necessary steps to control it [4].
The number of people who have had a coronavirus infection is in the millions. It is, therefore, a threat that invokes anxiety, depression, and indignation in people. To protect themselves, people now adhere to social distancing so as to not catch the infection from close contact with others [5]. Being at home all the time can affect the mental health of both children and adults. Children and adolescents have therefore been advised to focus on home activities to forget about the negative effects of the coronavirus [6].
Middle Eastern countries have also been affected by the pandemic. By 7 July 2020, 214,000 confirmed coronavirus cases and 1,968 deaths had occurred in Saudi Arabia. In the United Arab Emirates, 520,068 infected cases and 324 deaths were reported. In Egypt, 760,222 infected cases and 30,422 deaths were reported. A total of 1,167 infected cases and 10 deaths were reported in Jordan [7]. Several studies have reported on the negative effects of epidemics and pandemics on the psychology of infected people and their caregivers [8,9]. Those studies reported a high level of psychological stress among people providing care to infected cases. In many studies, people with acute respiratory syndromes (Ebola, MERS, and SARS) were reported to have psychological disorders such as anxiety, depression, and other forms of mental illness [10-12]. In the Saudi context, a study of the effect of MERS on psychological stress found that female students had a higher level of psychological stress than male students [13]. In the Omani and Bahraini contexts, the coronavirus-induced anxiety among families was average, and no significant differences were found by country. However, there were significant differences in favor of females, people aged over 40 years, people with lower educational levels, and unemployed people. Retired people were reported to experience the lowest level of anxiety [14].
In a study of the psychological impact, depression, anxiety, and stress at the beginning of the coronavirus pandemic in a sample of 1210 participants from 194 Chinese cities, 53.8% of the participants suffered an acute psychological impact because of the pandemic, whereas about 28.8% were found to suffer from average to acute anxiety [15]. In a study conducted in Italy, the percentage of people having high and severe coronavirus-related anxiety ranged between 2.89% and 7.43% [16]. The WHO also asserted that some populations, such as people working in health and security, had infection fears and suffered from stress due to dealing with infected people, work pressure, and changed sleep and eating routines. Ministers and leaders in authorities confronting the pandemic suffer from similar psychological effects [17]. Two surveys were also conducted online by the British Academy of Medical Sciences. The results of the first survey revealed that the majority of the sample suffered from problems with mental health. Participants reported fears about their health and access to support and services during the pandemic. The second survey reported anxiety among participants about social isolation and economic difficulties resulting from the pandemic. With expectations of increased anxiety and stress during the pandemic, researchers expect an increase in the number of depressed people and people who are prone to commit suicide. In 2003, during the SARS epidemic, the rate of suicide in people over 65 years witnessed a 30% increase. Researchers asserted that actions taken at that time to eliminate the spread of SARS had serious effects on people's mental health, as unemployment rates and feelings of financial insecurity and poverty increased [18].
Research results concerning gender differences in epidemic-related anxiety are inconsistent, with some studies reporting higher levels of anxiety among females [15,16,19,20] and others reporting higher levels among males [21]. Some studies have reported differences in anxiety about the future by gender in favor of females, and by social status in favor of the unmarried, with no differences found by profession [22][23][24]. In [25], low-to-average anxiety levels were found among participants; the frequency of mild, average, and severe anxiety was 7.7-78.8%, 5.6%, and 2.7-5.2%, respectively. That study did not find gender differences in depression and anxiety but did find differences in favor of the unmarried. The authors in [15] found higher levels of anxiety among students than among working personnel; the level of anxiety did not correlate with social status, family size, or age. Similarly, social status, having no children, and workplace did not significantly contribute to anxiety or depression [16].
Smartphone use has been globally widespread during the coronavirus pandemic, which has induced feelings of isolation, social distancing, and a need for leisure, recreation, and shopping [26]. With this intensive use, smartphone addiction has become a universal concern [27]. It is a recent phenomenon in human behavior that can adversely affect the mental health and social functioning of people who overuse smartphones [28]. Smartphone addiction is the overuse or compulsive use of smartphones, resulting in negative consequences in social, behavioral, and emotional functioning [29]. It is a form of behavioral addiction that makes the individual unable to control the strong desire to use the smartphone and its applications, with a loss of productivity, denial of negative effects, preoccupation, and feelings of annoyance and even panic when deprived of the smartphone [30]. Some studies have shown a connection between smartphone addiction and psychological adjustment problems, e.g., anxiety and depression. A Korean study found that smartphone addiction can be predicted by depression [31]. A similar finding was reached in a Chinese study [32], where loneliness, which relates to depression, was found to be a strong predictor of smartphone addiction. In an American study, social interaction anxiety was found to predict smartphone addiction [33]. Another study found a positive correlation between anxiety and depression and smartphone overuse [34]. Smartphone addiction could thus be predicted by anxiety and depression; anxiety, as a major symptom of smartphone addiction, emerges once the person is deprived of their smartphone [35,36], which shows that the smartphone itself is a source of anxiety [37]. Smartphone overuse has also been identified as a factor leading to mental health problems; the same study found that gender was the strongest predictor of depression and that symptoms of anxiety were more frequent in younger people [38].
A positive correlation between smartphone addiction and psychological stress was also found. Research also revealed a weak relationship between age and hours of use on the one hand, and smartphone addiction on the other [39].
More than one study did not find a correlation between age and smartphone addiction [40,41]. Meanwhile, a positive correlation was found between daily use hours and the problematic use of smartphones [42], and differences in smartphone addiction were found in favor of individuals using a smartphone for more than four hours a day [43]. The same finding was reported by Haug, who found a correlation between smartphone addiction and daily use hours [44]. Facebook addiction and state anxiety could be predicted by an increased use rate, and the interaction of gender and trait anxiety predicted Facebook addiction [45].
As mentioned above, the increase in cases of COVID-19 infection around the world in general, and in Middle Eastern countries in particular, can lead to increased levels of anxiety with negative behavioral effects, such as smartphone addiction. The present study aimed to identify the level and frequency of anxiety about COVID-19 infection in some Middle Eastern countries, and differences in this anxiety by country, gender, workplace, and social status. The study also aimed to identify the predictive power of anxiety about COVID-19 infection, daily smartphone use hours, and age in smartphone addiction.
---
2. Method

2.1. Participants
This study comprised a total of 651 participants (222 males and 429 females representing 34.1% and 65.9%, respectively) from four Middle Eastern countries: Jordan (n = 271, 41.6%), Saudi Arabia (n = 179, 27.5%), the United Arab Emirates (n = 108, 16.6%), and Egypt (n = 93, 14.3%). Their age ranged between 18 and 73 years (M = 33.35, SD = 10.69). Of the 651 participants, 246 (37.7%) were single, 378 (58.1%) were married, and 27 (4.22%) were divorced. The number of participants working for the government, the private sector, and students were 242 (37.2%), 243 (37.3%), and 166 (25.5%), respectively.
---
2.2. Instruments

2.2.1. Anxiety about COVID-19 Infection Scale
The authors developed a scale to measure anxiety about COVID-19 infection. To develop the scale, the authors surveyed scales in the relevant literature, e.g., the State-Trait Anxiety Inventory [46] and scales of social anxiety and general anxiety [15,[47][48][49][50][51]. The authors also used anxiety indicators, including the WHO's reports about prevention and the health guidelines for dealing with the virus. The scale had 40 items with 5-point Likert scales ranging from 5-'to a very high degree' to 1-'to a very low degree'. The preliminary version of the scale was face-validated by five professors who specialized in psychology, measurement, and evaluation. They were asked to judge if items represented the measured trait, and if the wording of items was sound and clear. This resulted in modifying some items, but no deletions were made.
Correlations between items and the total score were computed. These ranged from 0.628 to 0.842, all of which were high and statistically significant. The unidimensionality of the scale was established by factor analysis. The results revealed that all items loaded significantly on the first factor. The first eigenvalue was 22.025, and the second eigenvalue was 2.345. The explained variance of the first factor was 55.63%. This is consistent with Reckase's [52] suggestion that the unidimensionality condition is met if the first factor explains at least 20% of the total variance. The reliability of the scale was then checked by computing Cronbach's alpha coefficient for participant scores. The scale yielded an alpha coefficient of 0.978, which indicates that the scale was highly reliable.
Participant scores on the scale ranged between 40 and 200. Scores were categorized by range into high anxiety (146.8-200) with a weighted mean ranging from 3.67 to 5, average anxiety (93.4-146.7) with a weighted mean ranging from 2.34 to 3.66, and low anxiety (40-93.3) with a weighted mean ranging from 1 to 2.33.
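The cutoffs above amount to banding the 40-200 total score by its per-item (weighted) mean on the 5-point scale. A minimal sketch of that categorization (the function name is ours, not the study's):

```python
def anxiety_category(total_score: float) -> str:
    """Band a 40-200 total scale score using the study's cutoffs.

    The bands correspond to weighted (per-item) means on the 5-point
    scale: 1-2.33 low, 2.34-3.66 average, 3.67-5 high.
    """
    if not 40 <= total_score <= 200:
        raise ValueError("total score must lie in [40, 200]")
    if total_score <= 93.3:
        return "low"          # weighted mean up to 2.33
    if total_score <= 146.7:
        return "average"      # weighted mean 2.34-3.66
    return "high"             # weighted mean 3.67-5
```

For instance, the Egyptian sample's mean of 106.22 (reported later in Table 1) falls in the average band.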
---
2.2.2. Smartphone Addiction Inventory
After surveying the literature on smartphone addiction and the instruments used in relevant studies, we used the Smartphone Addiction Inventory (SPAI) employed in the studies of Pavia, Cavani, Di Blasi, and Giordano [53] and Lin et al. [54]. It is an inventory developed on the basis of the Chen Internet Addiction Scale (CIAS) [55]. Items of this inventory assess several dimensions of smartphone addiction: compulsive use, withdrawal, tolerance, problems in relationships with others, and time and health management. The reliability examination of the inventory was originally performed on a Chinese sample of 283 university students. Another examination of its psychometric characteristics and factor structure was performed in Italy [53]. The sample consisted of 485 male and female students whose ages ranged between 10 and 27 years. Exploratory and confirmatory factor analyses revealed that the items of the inventory loaded on five factors: time spent, compulsivity, daily life interference, craving, and sleep interference. The Cronbach's alpha reliability coefficient of the whole inventory was 0.94.
The English version of the inventory was translated into Arabic by two bilingual researchers. The accuracy of translation was verified by back translation, which was performed by a third researcher. The retranslated version was then compared with the original English version, and differences were very few. Very few adaptations were made to make the inventory suitable to the Arab environment. As a result, the version used in the study originally had 24 items, measuring 5 dimensions with a 4-point rating scale ranging from 4-'strongly agree' to 1-'strongly disagree'. Thus, a respondent's score on the inventory ranged from 24 to 96. The higher the score of a respondent was, the higher their level of smartphone addiction.
The inventory was validated by having it refereed by specialists and by establishing its construct validity. For construct validation, correlations between items and the total score were computed; they ranged between 0.63 and 0.85, and all were statistically significant. The unidimensionality of the inventory was established by exploratory factor analysis. The results revealed that all items loaded significantly on the first factor. Eigenvalues were 12.632 for the first factor and 1.460 for the second factor. The explained variance of the first factor was 52.635% (90% of the total variance before rotation) and 31.299% (53% of the total variance after rotation). This indicates that the inventory was unidimensional. The reliability of the inventory was then checked by computing Cronbach's alpha coefficient for the participants' scores. The inventory yielded an alpha coefficient of 0.982, which indicates that it was highly reliable.
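The internal-consistency checks used for both instruments (item-total correlations and Cronbach's alpha) can be reproduced in a few lines of NumPy. The data below are simulated stand-ins driven by a single latent trait, not the study's responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

def item_total_correlations(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total score, as described in the text."""
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total)[0, 1]
                     for j in range(items.shape[1])])

# Simulated 4-point responses to 24 items (the SPAI's format).
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))
items = np.clip(np.rint(3 + latent + 0.5 * rng.normal(size=(300, 24))), 1, 4)

alpha = cronbach_alpha(items)
item_r = item_total_correlations(items)
```

With strongly correlated items like these, alpha lands near the high values the study reports; for real data one would substitute the respondents-by-items score matrix.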
---
2.3. Procedures
The authors developed an electronic questionnaire, including the scale on anxiety about COVID-19 infection, the smartphone addiction inventory, and demographic data. The link to the questionnaire was then sent to participants via WhatsApp (Facebook Inc., Menlo Park, CA, USA) and Twitter with the help of authors who live in the four countries included in the study. Data collection took three weeks (the last two weeks of May and the first week of June 2020). The administration of the questionnaire coincided with the application of strict health procedures imposing social distancing and quarantining; movement between cities was also prohibited in the four countries. Other procedures included the prohibition of gatherings, distance learning, school closures, and restricted travel. The aims of the study and instructions for completing the questionnaire were provided with the electronic questionnaire. Participants were told that completion of the questionnaire was voluntary and that the data collected would be used only for research purposes. For this reason, they were not required to write their names or give any information about their identities. They were also told that honest completion of the questionnaire would be key to the successful completion of the study. Following this, the authors scored and codified the received questionnaires and categorized the data according to the study variables.
---
2.4. Data Analysis
The obtained data were statistically analyzed using IBM SPSS Statistics-25 (IBM, Armonk, NY, USA).
To answer the research question about the frequency of anxiety about COVID-19 infection, descriptive measures (frequencies, percentages, means, and standard deviations) were used. The t-test for independent samples was used to identify gender differences in anxiety about COVID-19 infection, and the ANOVA test was used to identify differences in anxiety about COVID-19 infection by country, social status, and workplace. Pearson's correlation was used to explore relationships among variables. Lastly, the multiple stepwise regression test was used to explore the predictive power of the anxiety about COVID-19 infection scale, daily smartphone use hours, and age in smartphone addiction.
---
Results
---
Frequency of Anxiety about COVID-19 Infection among Participants
Table 1 presents the means, standard deviations, and percentages of anxiety about COVID-19 infection by country. The country with the highest anxiety about COVID-19 infection was Egypt (M = 106.22), followed by Saudi Arabia (M = 98.31), the United Arab Emirates (M = 96.53), and Jordan (M = 93.45).
---
Differences among Countries in Anxiety about COVID-19 Infection
To identify differences among the four countries in anxiety about COVID-19 infection, the ANOVA test was performed. These results are listed in Table 2, which reveals significant differences among countries in anxiety about COVID-19 infection (p = 0.033, α = 0.05). The effect size was partial eta squared = 0.13; the country variable explained 13% of the variance in anxiety about COVID-19 infection. After performing post hoc analysis using the Scheffé test, differences were found to be significant only between Jordan and Egypt (p = 0.037, α = 0.05), in favor of Egypt, whose mean was higher.
---
Gender Differences in Anxiety about COVID-19 Infection
The t-test was performed to explore gender differences in anxiety about COVID-19 infection in the four countries. Table 3 presents these results. There were no statistically significant gender differences (α = 0.05) in anxiety about COVID-19 infection in Saudi Arabia. However, there were significant differences in Jordan (p = 0.007) and Egypt (p = 0.018) in favor of females, and in the United Arab Emirates (p = 0.013) in favor of males. The effect size according to Cohen was small in the Jordanian sample (0.363), and average in the Egyptian (0.550) and Emirati (0.626) samples. At the level of the whole sample, there were significant differences (p = 0.013) in favor of females, with a low effect size (0.206).
---
Differences in Anxiety about COVID-19 Infection by Social Status
Differences in anxiety about COVID-19 infection by social status were explored by performing the ANOVA test with three categories: single, married, and divorced. These results are presented in Table 4; there were no significant differences (p = 0.364, α = 0.05) in anxiety about COVID-19 infection by social status.
---
Differences in Anxiety about COVID-19 Infection by Workplace
Differences in anxiety about COVID-19 infection by workplace were explored by performing the ANOVA test with three categories: governmental job, private-sector job, and student. These results are presented in Table 5; there were no significant differences (p = 0.390, α = 0.05) in anxiety about COVID-19 infection by workplace.
---
Predicting Smartphone Addiction by Anxiety about COVID-19 Infection, Daily Smartphone Use Hours, and Age
Pearson correlations among study variables were computed. A statistically significant (a = 0.01) negative relationship (r = -0.122) was found between age and smartphone addiction. A statistically significant (a = 0.05) negative relationship (r = -0.071) was found between age and anxiety about COVID-19 infection. A statistically significant negative relationship (r = -0.242) was found between age and daily smartphone use hours. Lastly, a statistically significant (a = 0.01) positive relationship (r = 0.427) was found between smartphone addiction and anxiety about COVID-19 infection, and between smartphone addiction and daily smartphone use hours (r = 0.357).
To identify the predictive power of anxiety about COVID-19 infection, daily smartphone use hours, and age in smartphone addiction, stepwise multiple regression was used. Table 6 shows these results. Collinearity was checked using the variance inflation factor (VIF), and the value was less than 10 (average VIF = 1), which indicated that multicollinearity was not a problem. Anxiety about COVID-19 infection was the best predictor of smartphone addiction, explaining 0.181 of the variance in smartphone addiction. Together, anxiety about COVID-19 infection and daily smartphone use hours explained 0.273 of the variance in smartphone addiction; thus, daily use hours predicted an additional 0.090 of the variance, which was significant at the 0.01 level. The prediction equation can be stated as follows: smartphone addiction = 31.160 + 0.168 × anxiety about COVID-19 infection + 1.127 × daily use hours.
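The reported equation can be written directly as a function (the coefficients are the study's; the function name is ours):

```python
def predicted_spai(anxiety_score: float, daily_use_hours: float) -> float:
    """Predicted smartphone-addiction (SPAI) score from the study's
    stepwise regression: 31.160 + 0.168*anxiety + 1.127*hours."""
    return 31.160 + 0.168 * anxiety_score + 1.127 * daily_use_hours
```

For example, a respondent with the Egyptian sample's mean anxiety (106.22) who uses the phone 4 h a day would be predicted at about 53.5 on the 24-96 SPAI range.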
---
Discussion
The results of the study revealed that the percentages of participants who had high, average, and low anxiety about COVID-19 infection were 10.3%, 37.3%, and 52.4%, respectively. This reflects an average level of anxiety for the whole sample. Regarding the frequency of anxiety in the four target countries, all frequencies were at the average level, with Egypt in first place with a mean of 2.655, followed by Saudi Arabia (M = 2.458), the United Arab Emirates (M = 2.413), and Jordan (M = 2.336). This finding largely concurs with the findings of previous studies conducted in some Gulf states during the outbreak of the pandemic, which reported average anxiety and stress resulting from the pandemic [13,14]. The percentages here were also close to their counterparts in the Chinese study [15], and slightly higher in high anxiety than the percentages in the Italian study [16]. The current percentages of average and high anxiety exceeded their counterparts in the Australian study [25]. Data collection in the present study coincided with the application of strict health procedures in all countries globally, e.g., the prohibition of gatherings and curfews.
Regarding differences in the frequency of anxiety about COVID-19 infection in the four Arab countries, the results revealed significant differences between Egypt and Jordan in favor of Egypt. This finding seems logical given that Egypt, in comparison with Jordan, was late in imposing health restrictions and in releasing the real numbers of infected cases. Unlike Egypt, Jordan took action as soon as the first infected case appeared: it imposed a curfew and closed schools, governmental institutions, mosques, and airports. Such procedures largely reduced the number of infected cases. Accordingly, the number of infected cases in Jordan up to 7 July was 1167, with a recovery rate of 82% and a death rate of 0.86%. On the other hand, the number of infected cases in Egypt up to 7 July was 76,222, with a recovery rate of 28% and a death rate of 4.5%. This may reflect a deficiency in health procedures and in the provision of health support to critical cases that required special and costly treatment protocols. Egypt's population is also more than 100 million. The frequency of high anxiety (15.1%) in Egypt exceeded its counterparts in a number of Arab and Asian countries covered in previous studies [13][14][15][16][25].
The finding of insignificant differences in the level of anxiety about COVID-19 infection between Saudi Arabia and the United Arab Emirates is in line with the study conducted on Omani and Bahraini samples [14], in which no significant differences were found between the two countries in anxiety about COVID-19 infection. The Saudi environment is largely similar to the Omani and Bahraini environments.
Analysis of the data collected from the whole sample revealed gender differences in anxiety about COVID-19 infection in favor of females. Gender differences were also found in three of the four countries: the differences were in favor of females in the Egyptian and Jordanian samples, and in favor of males in the Emirati sample. No gender differences were found in the Saudi sample. This general finding that females have higher anxiety about COVID-19 infection than males concurs with several previous studies [13,15,16,56]. This finding is also consistent with previous studies exploring gender differences in general psychological anxiety [19,20,22-24,57].
The finding about gender differences in COVID-19 infection anxiety in favor of males in the Emirati sample can be explained by the fact that most respondents in the sample were non-Emirati, who represent about 89% of the total population in the United Arab Emirates. Those respondents live with their families and work in various sectors in the country. Male residents in the United Arab Emirates may have higher levels of anxiety than females do because of fears about their jobs with the economic damages resulting from the pandemic. Some sectors there made some employees redundant or reduced their salaries. For this reason, non-Emirati male employees may fear the loss of their jobs and becoming unable to sustain their families. Women, on the other hand, do not have these fears because women in Eastern societies are not required to work and sustain their families. Furthermore, women stay at home most of the time, which makes them less anxious about catching the infection. This finding is consistent with [14], in which residents had higher levels of anxiety because of the lack of occupational security and being in countries other than their own. This finding is also consistent with studies investigating general anxiety and anxiety about the future, in which males were reported to have higher levels of anxiety than females [21,58].
Males and females in the Saudi sample had comparable levels of anxiety about infection. A possible explanation for this finding is that they live in the same environment and face the same threats. This finding is in line with the Australian study, in which no significant difference in infection anxiety was found [25].
Regarding the effect of social status on anxiety about infection, no significant differences were found among single, married, and divorced participants. This means that anxiety about infection is not affected by one's social status. The same finding was reached in [15,16]. It is, however, inconsistent with [25], in which the unmarried had higher levels of anxiety, and with studies conducted before the coronavirus pandemic, in which the unmarried had higher levels of general anxiety [22][23][24].
As with social status, no differences were found in infection anxiety by workplace. Participants working for the government, participants working in the private sector, and students had comparable levels of anxiety about infection. This finding is consistent with [16], and with studies that did not find differences in general anxiety by workplace [22][23][24]. However, it is inconsistent with [15], in which students reported higher anxiety than employees, and with [14], in which unemployed respondents reported higher infection anxiety than employees. Overall, social status and workplace need to be further studied alongside other variables, such as educational level, income, and age, because of the inconsistent results about the latter two variables in the few studies conducted so far.
Regarding the predictive power of infection anxiety, daily smartphone use hours, and age in smartphone addiction, stepwise multiple regression analysis revealed that smartphone addiction can be predicted by infection anxiety and daily smartphone use hours. Age, on the other hand, did not contribute to the prediction of smartphone addiction. Infection anxiety was the strongest predictor of smartphone addiction, followed by daily use hours. This means that people who are more anxious about infection tend to excessively use their smartphone. The authors did not find studies exploring the relationship between infection anxiety and smartphone addiction. However, the current study's findings are in line with previous studies that reported a positive correlation between general anxiety, depression, stress, and loneliness on the one hand, and smartphone addiction on the other [9,[27][28][29][30][31][32][33][34]37,39].
This finding seems logical and concurs with the mainstream views from previous research. With the outbreak of the coronavirus and restrictions such as social distancing and staying at home most of the time, the smartphone can become people's only resort to vent, pass the time, and search for information about the virus. The smartphone is also used for distance learning due to the closure of schools. People may therefore use the smartphone so excessively that they cannot control the time spent on it. This, in turn, can lead to compulsivity, sleep interference, and excessive attachment to the smartphone, which are all symptoms of addiction. This concurs with [35,36], which reported anxiety as a major symptom of smartphone addiction. It also concurs with the assertion of [59] that overdependence on smartphones and the use of social media to follow current events can result in the fear of missing events, known as "fear of missing out". The finding that daily use hours contribute to smartphone addiction is consistent with some studies [42][43][44][45], and is inconsistent with [39], in which a weak relationship was found between smartphone use hours and addiction.
The finding that age did not contribute to the prediction of smartphone addiction despite the presence of a significant negative relationship between them indicates that age is not a factor contributing to smartphone addiction during the coronavirus pandemic. This is in line with most studies that have examined the relationship between age and smartphone addiction [39][40][41].
Lastly, Pearson's correlation revealed a significant, weak relationship between age and infection anxiety. This finding is partly in line with studies in which anxiety was found to be more frequent among young people [16,25,56]. It also concurs with the contention that young people are more prone to anxiety because of their quick access to information via social media [60]. This finding is inconsistent with [14], in which people aged over 40 years were found to be more anxious about infection, and with [15], in which age did not correlate with anxiety.
---
Conclusions
This study explored anxiety about COVID-19 infection and its relationship with some psychological and demographic variables. It revealed that this anxiety exists in some Middle Eastern countries: regardless of their social status, workplace, and age, participants suffered average-to-high infection anxiety, so some form of intervention is required to prevent this anxiety from becoming morbid. The study also revealed that infection anxiety can lead to smartphone addiction, with all its negative psychological and physical effects, as well as the disorder known as nomophobia. Women were found to be more anxious about infection, and anxiety about infection and daily use hours were found to significantly contribute to smartphone addiction.

It is therefore necessary to develop preventive programs to counter this phenomenon and to raise awareness about the judicious use of smartphones. People should be advised on how to find alternative ways to spend time fruitfully, control their desire to use smartphones, and sterilize their phones, which can be a source of infection. It is also recommended that social media be used to support people during the pandemic and to instruct them on how to keep safe and manage their anxiety. Parents should be told not to show fear and anxiety in front of their children, as this can have negative effects on their development. People, especially mothers, should be advised not to spend too much time on social media, as this can make them anxious and can adversely affect childcare, which might cause insecure attachment. During quarantine, people should telecommunicate with relatives to alleviate the impact of social isolation on children and adolescents, and can make an opportunity out of the crisis to practice activities and hobbies for which they previously had no time. Adults should distract children and adolescents from bad news by providing them with daily home activities and events.
The use of smartphones and electronic games by children should be monitored so that they do not become addicted to them.
It is a good idea to develop electronic programs to enhance people's psychological hardiness and to teach them how to face crises. There should also be programs of interest for old people to help them safely pass the time. It is also necessary to use social media to spread awareness about the virus and preventive healthcare. In this respect, medical sites do not make good use of social media.
Study results show that further research is required to explore anxiety about infection on larger samples and different populations. Future research is expected to focus on the negative effects of infection anxiety. Research endeavors are also required to develop and test the effectiveness of counseling and preventive programs in eliminating pandemicrelated anxiety. Researchers can also examine the relationship between anxiety about infection and other variables-such as depression, burnout, anxiety about the future and death, psychological security, hardiness, optimism and pessimism, healthy behavior, selfefficacy, and achievement-and attitudes to the vaccination process.
Even though the results of the study documented the relationship between smartphone addiction and gender, the application of the instruments to a limited sample from four Middle Eastern countries, whose ages ranged between 18 and 73 years, limits the generalizability of the results to other age groups and populations in different contexts. Furthermore, the study was limited to an electronic questionnaire distributed via social media (WhatsApp and Twitter) during the period of restrictions on movement and social distancing. The authors used the available data in the four countries about the numbers of infected cases and deaths up to 7 July 2020. Cautious interpretation of the results is important due to the use of a self-reported questionnaire, as the results of self-reported questionnaires are prone to be affected by social desirability. Lastly, this study used the descriptive-comparative method; further experimental and longitudinal studies using quantitative and qualitative data collection tools are required.
---
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000.
---
Data Availability Statement:
The data of this study are available on request.
---
Conflicts of Interest:
The authors declare that they have no conflict of interest. |